00:00:00.001 Started by upstream project "autotest-per-patch" build number 132520 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.105 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.106 The recommended git tool is: git 00:00:00.107 using credential 00000000-0000-0000-0000-000000000002 00:00:00.109 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.145 Fetching changes from the remote Git repository 00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.177 Using shallow fetch with depth 1 00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.177 > git --version # timeout=10 00:00:00.207 > git --version # 'git version 2.39.2' 00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.230 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.230 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.997 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.009 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.022 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.022 > git config core.sparsecheckout # timeout=10 00:00:07.034 > git read-tree -mu HEAD # timeout=10 00:00:07.052 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.085 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.085 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.191 [Pipeline] Start of Pipeline 00:00:07.205 [Pipeline] library 00:00:07.207 Loading library shm_lib@master 00:00:07.207 Library shm_lib@master is cached. Copying from home. 00:00:07.222 [Pipeline] node 00:00:07.230 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.232 [Pipeline] { 00:00:07.240 [Pipeline] catchError 00:00:07.241 [Pipeline] { 00:00:07.252 [Pipeline] wrap 00:00:07.260 [Pipeline] { 00:00:07.267 [Pipeline] stage 00:00:07.269 [Pipeline] { (Prologue) 00:00:07.498 [Pipeline] sh 00:00:07.778 + logger -p user.info -t JENKINS-CI 00:00:07.791 [Pipeline] echo 00:00:07.792 Node: WFP8 00:00:07.798 [Pipeline] sh 00:00:08.093 [Pipeline] setCustomBuildProperty 00:00:08.105 [Pipeline] echo 00:00:08.107 Cleanup processes 00:00:08.113 [Pipeline] sh 00:00:08.401 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.401 463932 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.415 [Pipeline] sh 00:00:08.701 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.701 ++ grep -v 'sudo pgrep' 00:00:08.701 ++ awk '{print $1}' 00:00:08.701 + sudo kill -9 00:00:08.701 + true 00:00:08.716 [Pipeline] cleanWs 00:00:08.727 [WS-CLEANUP] Deleting project workspace... 00:00:08.727 [WS-CLEANUP] Deferred wipeout is used... 
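While the workspace wipe above completes, a note on the "Cleanup processes" step traced just before it: it looks for stray autotest processes that still reference the workspace and kills them so the new run starts clean. A minimal sketch of that idiom, with the workspace path taken from the trace (the variable names are illustrative, not part of the job definition):

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # list candidate PIDs, excluding the pgrep invocation itself
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill any leftovers; '|| true' keeps the stage green when nothing was found
  sudo kill -9 $pids || true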
00:00:08.733 [WS-CLEANUP] done 00:00:08.738 [Pipeline] setCustomBuildProperty 00:00:08.755 [Pipeline] sh 00:00:09.034 + sudo git config --global --replace-all safe.directory '*' 00:00:09.113 [Pipeline] httpRequest 00:00:09.491 [Pipeline] echo 00:00:09.493 Sorcerer 10.211.164.20 is alive 00:00:09.500 [Pipeline] retry 00:00:09.501 [Pipeline] { 00:00:09.510 [Pipeline] httpRequest 00:00:09.514 HttpMethod: GET 00:00:09.514 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.514 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.527 Response Code: HTTP/1.1 200 OK 00:00:09.527 Success: Status code 200 is in the accepted range: 200,404 00:00:09.527 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.569 [Pipeline] } 00:00:12.582 [Pipeline] // retry 00:00:12.588 [Pipeline] sh 00:00:12.863 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.878 [Pipeline] httpRequest 00:00:13.264 [Pipeline] echo 00:00:13.266 Sorcerer 10.211.164.20 is alive 00:00:13.277 [Pipeline] retry 00:00:13.280 [Pipeline] { 00:00:13.297 [Pipeline] httpRequest 00:00:13.302 HttpMethod: GET 00:00:13.302 URL: http://10.211.164.20/packages/spdk_9c7e54d6220eceb721cb093570a81aa80dff2f55.tar.gz 00:00:13.302 Sending request to url: http://10.211.164.20/packages/spdk_9c7e54d6220eceb721cb093570a81aa80dff2f55.tar.gz 00:00:13.324 Response Code: HTTP/1.1 200 OK 00:00:13.324 Success: Status code 200 is in the accepted range: 200,404 00:00:13.325 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9c7e54d6220eceb721cb093570a81aa80dff2f55.tar.gz 00:03:07.716 [Pipeline] } 00:03:07.731 [Pipeline] // retry 00:03:07.738 [Pipeline] sh 00:03:08.021 + tar --no-same-owner -xf spdk_9c7e54d6220eceb721cb093570a81aa80dff2f55.tar.gz 00:03:10.575 [Pipeline] sh 00:03:10.860 + git -C spdk log --oneline -n5 00:03:10.860 9c7e54d62 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:03:10.860 9ebbe7008 blob: fix possible memory leak in bs loading 00:03:10.860 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen 00:03:10.860 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE 00:03:10.860 9a6847636 bdev/nvme: Fix spdk_bdev_nvme_create() 00:03:10.872 [Pipeline] } 00:03:10.886 [Pipeline] // stage 00:03:10.895 [Pipeline] stage 00:03:10.897 [Pipeline] { (Prepare) 00:03:10.914 [Pipeline] writeFile 00:03:10.931 [Pipeline] sh 00:03:11.214 + logger -p user.info -t JENKINS-CI 00:03:11.224 [Pipeline] sh 00:03:11.501 + logger -p user.info -t JENKINS-CI 00:03:11.512 [Pipeline] sh 00:03:11.795 + cat autorun-spdk.conf 00:03:11.795 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:11.795 SPDK_TEST_NVMF=1 00:03:11.795 SPDK_TEST_NVME_CLI=1 00:03:11.795 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:11.795 SPDK_TEST_NVMF_NICS=e810 00:03:11.795 SPDK_TEST_VFIOUSER=1 00:03:11.795 SPDK_RUN_UBSAN=1 00:03:11.795 NET_TYPE=phy 00:03:11.803 RUN_NIGHTLY=0 00:03:11.807 [Pipeline] readFile 00:03:11.831 [Pipeline] withEnv 00:03:11.833 [Pipeline] { 00:03:11.844 [Pipeline] sh 00:03:12.128 + set -ex 00:03:12.128 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:12.128 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:12.128 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:12.128 ++ SPDK_TEST_NVMF=1 00:03:12.128 ++ SPDK_TEST_NVME_CLI=1 00:03:12.128 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:12.128 ++ 
SPDK_TEST_NVMF_NICS=e810 00:03:12.128 ++ SPDK_TEST_VFIOUSER=1 00:03:12.128 ++ SPDK_RUN_UBSAN=1 00:03:12.128 ++ NET_TYPE=phy 00:03:12.128 ++ RUN_NIGHTLY=0 00:03:12.128 + case $SPDK_TEST_NVMF_NICS in 00:03:12.128 + DRIVERS=ice 00:03:12.128 + [[ tcp == \r\d\m\a ]] 00:03:12.128 + [[ -n ice ]] 00:03:12.128 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:12.128 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:15.419 rmmod: ERROR: Module irdma is not currently loaded 00:03:15.419 rmmod: ERROR: Module i40iw is not currently loaded 00:03:15.419 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:15.419 + true 00:03:15.419 + for D in $DRIVERS 00:03:15.420 + sudo modprobe ice 00:03:15.420 + exit 0 00:03:15.428 [Pipeline] } 00:03:15.506 [Pipeline] // withEnv 00:03:15.512 [Pipeline] } 00:03:15.528 [Pipeline] // stage 00:03:15.539 [Pipeline] catchError 00:03:15.541 [Pipeline] { 00:03:15.554 [Pipeline] timeout 00:03:15.554 Timeout set to expire in 1 hr 0 min 00:03:15.555 [Pipeline] { 00:03:15.565 [Pipeline] stage 00:03:15.566 [Pipeline] { (Tests) 00:03:15.577 [Pipeline] sh 00:03:15.860 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:15.860 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:15.860 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:15.860 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:15.860 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:15.860 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:15.860 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:15.860 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:15.860 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:15.860 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:15.860 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:15.860 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:15.860 + source /etc/os-release 00:03:15.860 ++ NAME='Fedora Linux' 00:03:15.860 ++ VERSION='39 (Cloud Edition)' 00:03:15.860 ++ ID=fedora 00:03:15.861 ++ VERSION_ID=39 00:03:15.861 ++ VERSION_CODENAME= 00:03:15.861 ++ PLATFORM_ID=platform:f39 00:03:15.861 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:15.861 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:15.861 ++ LOGO=fedora-logo-icon 00:03:15.861 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:15.861 ++ HOME_URL=https://fedoraproject.org/ 00:03:15.861 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:15.861 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:15.861 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:15.861 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:15.861 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:15.861 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:15.861 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:15.861 ++ SUPPORT_END=2024-11-12 00:03:15.861 ++ VARIANT='Cloud Edition' 00:03:15.861 ++ VARIANT_ID=cloud 00:03:15.861 + uname -a 00:03:15.861 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:15.861 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:18.395 Hugepages 00:03:18.395 node hugesize free / total 00:03:18.395 node0 1048576kB 0 / 0 00:03:18.395 node0 2048kB 1024 / 1024 00:03:18.395 node1 1048576kB 0 / 0 00:03:18.395 node1 2048kB 1024 / 1024 00:03:18.395 00:03:18.396 Type BDF Vendor Device NUMA Driver Device 
Block devices 00:03:18.396 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:18.396 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:18.396 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:18.396 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:18.396 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:18.396 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:18.396 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:18.396 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:18.396 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:18.396 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:18.396 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:18.396 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:18.396 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:18.396 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:18.396 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:18.396 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:18.396 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:18.396 + rm -f /tmp/spdk-ld-path 00:03:18.396 + source autorun-spdk.conf 00:03:18.396 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:18.396 ++ SPDK_TEST_NVMF=1 00:03:18.396 ++ SPDK_TEST_NVME_CLI=1 00:03:18.396 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:18.396 ++ SPDK_TEST_NVMF_NICS=e810 00:03:18.396 ++ SPDK_TEST_VFIOUSER=1 00:03:18.396 ++ SPDK_RUN_UBSAN=1 00:03:18.396 ++ NET_TYPE=phy 00:03:18.396 ++ RUN_NIGHTLY=0 00:03:18.396 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:18.396 + [[ -n '' ]] 00:03:18.396 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:18.396 + for M in /var/spdk/build-*-manifest.txt 00:03:18.396 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:18.396 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:18.396 + for M in /var/spdk/build-*-manifest.txt 00:03:18.396 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:18.396 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:18.396 + for M in /var/spdk/build-*-manifest.txt 00:03:18.396 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:18.396 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:18.396 ++ uname 00:03:18.396 + [[ Linux == \L\i\n\u\x ]] 00:03:18.396 + sudo dmesg -T 00:03:18.655 + sudo dmesg --clear 00:03:18.655 + dmesg_pid=465403 00:03:18.655 + [[ Fedora Linux == FreeBSD ]] 00:03:18.655 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:18.655 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:18.655 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:18.655 + [[ -x /usr/src/fio-static/fio ]] 00:03:18.655 + export FIO_BIN=/usr/src/fio-static/fio 00:03:18.655 + FIO_BIN=/usr/src/fio-static/fio 00:03:18.655 + sudo dmesg -Tw 00:03:18.655 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:18.655 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:18.655 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:18.655 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:18.655 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:18.655 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:18.655 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:18.655 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:18.655 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:18.655 07:12:46 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:18.655 07:12:46 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:03:18.655 07:12:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:18.655 07:12:46 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:18.655 07:12:46 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:18.655 07:12:46 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:18.655 07:12:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:18.656 07:12:46 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:18.656 07:12:46 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:18.656 07:12:46 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:18.656 07:12:46 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:18.656 07:12:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.656 07:12:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.656 07:12:46 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.656 07:12:46 -- paths/export.sh@5 -- $ export PATH 00:03:18.656 07:12:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:18.656 07:12:46 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:18.656 07:12:46 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:18.656 07:12:46 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732601566.XXXXXX 00:03:18.656 07:12:46 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732601566.E8nk5i 00:03:18.656 07:12:46 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:18.656 07:12:46 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:18.656 07:12:46 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:18.656 07:12:46 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:18.656 07:12:46 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:18.656 07:12:46 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:18.656 07:12:46 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:18.656 07:12:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:18.656 07:12:46 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:18.656 07:12:46 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:18.656 07:12:46 -- pm/common@17 -- $ local monitor 00:03:18.656 07:12:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.656 07:12:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.656 07:12:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.656 07:12:46 -- pm/common@21 -- $ date +%s 00:03:18.656 07:12:46 -- pm/common@21 -- $ date +%s 00:03:18.656 07:12:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.656 07:12:46 -- pm/common@21 -- $ date +%s 00:03:18.656 07:12:46 -- pm/common@25 -- $ sleep 1 00:03:18.656 07:12:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 
-p monitor.autobuild.sh.1732601566 00:03:18.656 07:12:46 -- pm/common@21 -- $ date +%s 00:03:18.656 07:12:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601566 00:03:18.656 07:12:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601566 00:03:18.656 07:12:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601566 00:03:18.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601566_collect-vmstat.pm.log 00:03:18.915 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601566_collect-cpu-temp.pm.log 00:03:18.916 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601566_collect-cpu-load.pm.log 00:03:18.916 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601566_collect-bmc-pm.bmc.pm.log 00:03:19.854 07:12:47 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:19.854 07:12:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:19.854 07:12:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:19.854 07:12:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:19.854 07:12:47 -- spdk/autobuild.sh@16 -- $ date -u 00:03:19.854 Tue Nov 26 06:12:47 AM UTC 2024 00:03:19.854 07:12:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:19.854 v25.01-pre-238-g9c7e54d62 00:03:19.854 07:12:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:19.854 07:12:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:19.854 07:12:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:19.854 07:12:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:19.854 07:12:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:19.854 07:12:47 -- common/autotest_common.sh@10 -- $ set +x 00:03:19.854 ************************************ 00:03:19.854 START TEST ubsan 00:03:19.854 ************************************ 00:03:19.854 07:12:47 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:19.854 using ubsan 00:03:19.854 00:03:19.854 real 0m0.000s 00:03:19.854 user 0m0.000s 00:03:19.854 sys 0m0.000s 00:03:19.854 07:12:47 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:19.854 07:12:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:19.854 ************************************ 00:03:19.854 END TEST ubsan 00:03:19.854 ************************************ 00:03:19.854 07:12:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:19.854 07:12:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:19.854 07:12:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:19.854 07:12:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:19.854 07:12:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:19.854 07:12:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:19.854 07:12:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:19.854 07:12:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:19.854 07:12:47 -- 
spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:20.114 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:20.114 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:20.374 Using 'verbs' RDMA provider 00:03:33.153 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:45.361 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:45.361 Creating mk/config.mk...done. 00:03:45.361 Creating mk/cc.flags.mk...done. 00:03:45.361 Type 'make' to build. 00:03:45.361 07:13:12 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:45.361 07:13:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:45.361 07:13:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:45.361 07:13:12 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.361 ************************************ 00:03:45.361 START TEST make 00:03:45.361 ************************************ 00:03:45.361 07:13:12 make -- common/autotest_common.sh@1129 -- $ make -j96 00:03:45.361 make[1]: Nothing to be done for 'all'. 00:03:45.931 The Meson build system 00:03:45.931 Version: 1.5.0 00:03:45.931 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:45.931 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:45.931 Build type: native build 00:03:45.931 Project name: libvfio-user 00:03:45.931 Project version: 0.0.1 00:03:45.931 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:45.931 C linker for the host machine: cc ld.bfd 2.40-14 00:03:45.931 Host machine cpu family: x86_64 00:03:45.931 Host machine cpu: x86_64 00:03:45.931 Run-time dependency threads found: YES 00:03:45.931 Library dl found: YES 00:03:45.931 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:45.931 Run-time dependency json-c found: YES 0.17 00:03:45.931 Run-time dependency cmocka found: YES 1.1.7 00:03:45.931 Program pytest-3 found: NO 00:03:45.931 Program flake8 found: NO 00:03:45.931 Program misspell-fixer found: NO 00:03:45.931 Program restructuredtext-lint found: NO 00:03:45.931 Program valgrind found: YES (/usr/bin/valgrind) 00:03:45.931 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:45.931 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:45.931 Compiler for C supports arguments -Wwrite-strings: YES 00:03:45.931 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:45.931 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:45.931 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:45.931 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
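Before the libvfio-user Meson summary continues below, note that the autobuild step at the top of this block configured and built SPDK itself. A minimal sketch of reproducing that step by hand, using only the flags recorded in the configure trace above (-j96 matches the run_test make invocation; the checkout path is the one used throughout this job):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # feature flags copied from the autobuild configure trace earlier in this section
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j96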
00:03:45.931 Build targets in project: 8 00:03:45.931 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:45.931 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:45.931 00:03:45.931 libvfio-user 0.0.1 00:03:45.931 00:03:45.931 User defined options 00:03:45.931 buildtype : debug 00:03:45.931 default_library: shared 00:03:45.931 libdir : /usr/local/lib 00:03:45.931 00:03:45.931 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:46.497 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:46.497 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:46.497 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:46.497 [3/37] Compiling C object samples/null.p/null.c.o 00:03:46.497 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:46.497 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:46.497 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:46.497 [7/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:46.497 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:46.497 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:46.497 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:46.497 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:46.497 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:46.497 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:46.497 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:46.497 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:46.497 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:46.497 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:46.754 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:46.754 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:46.754 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:46.754 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:46.754 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:46.754 [23/37] Compiling C object samples/client.p/client.c.o 00:03:46.754 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:46.754 [25/37] Compiling C object samples/server.p/server.c.o 00:03:46.754 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:46.754 [27/37] Linking target samples/client 00:03:46.754 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:46.754 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:46.754 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:46.754 [31/37] Linking target test/unit_tests 00:03:47.013 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:47.013 [33/37] Linking target samples/null 00:03:47.013 [34/37] Linking target samples/gpio-pci-idio-16 00:03:47.013 [35/37] Linking target samples/lspci 00:03:47.013 [36/37] Linking target samples/server 00:03:47.013 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:47.013 INFO: autodetecting backend as ninja 00:03:47.013 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
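The libvfio-user build that just finished follows the standard Meson flow, and the staged install with DESTDIR appears in the next trace line. A minimal sketch of the equivalent manual sequence, with the source and build directories taken from the log; the meson setup options are inferred from the "User defined options" summary above and are assumptions, since the exact invocation is not shown in this excerpt:

  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  # configure: buildtype/default_library/libdir mirror the summary above (assumed flags)
  meson setup "$BUILD" "$SRC" --buildtype debug --default-library shared --libdir /usr/local/lib
  # compile the 37 targets listed above
  ninja -C "$BUILD"
  # staged install into the SPDK build tree, as traced in the next line
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C "$BUILD"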
00:03:47.013 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:47.272 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:47.272 ninja: no work to do. 00:03:52.547 The Meson build system 00:03:52.547 Version: 1.5.0 00:03:52.547 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:52.547 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:52.547 Build type: native build 00:03:52.547 Program cat found: YES (/usr/bin/cat) 00:03:52.547 Project name: DPDK 00:03:52.547 Project version: 24.03.0 00:03:52.547 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:52.547 C linker for the host machine: cc ld.bfd 2.40-14 00:03:52.547 Host machine cpu family: x86_64 00:03:52.547 Host machine cpu: x86_64 00:03:52.547 Message: ## Building in Developer Mode ## 00:03:52.547 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:52.547 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:52.547 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:52.547 Program python3 found: YES (/usr/bin/python3) 00:03:52.547 Program cat found: YES (/usr/bin/cat) 00:03:52.547 Compiler for C supports arguments -march=native: YES 00:03:52.547 Checking for size of "void *" : 8 00:03:52.547 Checking for size of "void *" : 8 (cached) 00:03:52.547 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:52.547 Library m found: YES 00:03:52.547 Library numa found: YES 00:03:52.547 Has header "numaif.h" : YES 00:03:52.547 Library fdt found: NO 00:03:52.547 Library execinfo found: NO 00:03:52.547 Has header "execinfo.h" : YES 00:03:52.547 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:52.547 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:52.547 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:52.547 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:52.547 Run-time dependency openssl found: YES 3.1.1 00:03:52.547 Run-time dependency libpcap found: YES 1.10.4 00:03:52.547 Has header "pcap.h" with dependency libpcap: YES 00:03:52.547 Compiler for C supports arguments -Wcast-qual: YES 00:03:52.547 Compiler for C supports arguments -Wdeprecated: YES 00:03:52.547 Compiler for C supports arguments -Wformat: YES 00:03:52.547 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:52.547 Compiler for C supports arguments -Wformat-security: NO 00:03:52.547 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:52.547 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:52.547 Compiler for C supports arguments -Wnested-externs: YES 00:03:52.547 Compiler for C supports arguments -Wold-style-definition: YES 00:03:52.547 Compiler for C supports arguments -Wpointer-arith: YES 00:03:52.547 Compiler for C supports arguments -Wsign-compare: YES 00:03:52.547 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:52.547 Compiler for C supports arguments -Wundef: YES 00:03:52.547 Compiler for C supports arguments -Wwrite-strings: YES 00:03:52.547 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:52.547 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:03:52.547 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:52.547 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:52.547 Program objdump found: YES (/usr/bin/objdump) 00:03:52.547 Compiler for C supports arguments -mavx512f: YES 00:03:52.548 Checking if "AVX512 checking" compiles: YES 00:03:52.548 Fetching value of define "__SSE4_2__" : 1 00:03:52.548 Fetching value of define "__AES__" : 1 00:03:52.548 Fetching value of define "__AVX__" : 1 00:03:52.548 Fetching value of define "__AVX2__" : 1 00:03:52.548 Fetching value of define "__AVX512BW__" : 1 00:03:52.548 Fetching value of define "__AVX512CD__" : 1 00:03:52.548 Fetching value of define "__AVX512DQ__" : 1 00:03:52.548 Fetching value of define "__AVX512F__" : 1 00:03:52.548 Fetching value of define "__AVX512VL__" : 1 00:03:52.548 Fetching value of define "__PCLMUL__" : 1 00:03:52.548 Fetching value of define "__RDRND__" : 1 00:03:52.548 Fetching value of define "__RDSEED__" : 1 00:03:52.548 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:52.548 Fetching value of define "__znver1__" : (undefined) 00:03:52.548 Fetching value of define "__znver2__" : (undefined) 00:03:52.548 Fetching value of define "__znver3__" : (undefined) 00:03:52.548 Fetching value of define "__znver4__" : (undefined) 00:03:52.548 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:52.548 Message: lib/log: Defining dependency "log" 00:03:52.548 Message: lib/kvargs: Defining dependency "kvargs" 00:03:52.548 Message: lib/telemetry: Defining dependency "telemetry" 00:03:52.548 Checking for function "getentropy" : NO 00:03:52.548 Message: lib/eal: Defining dependency "eal" 00:03:52.548 Message: lib/ring: Defining dependency "ring" 00:03:52.548 Message: lib/rcu: Defining dependency "rcu" 00:03:52.548 Message: lib/mempool: Defining dependency "mempool" 00:03:52.548 Message: lib/mbuf: Defining dependency "mbuf" 00:03:52.548 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:52.548 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:52.548 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:52.548 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:52.548 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:52.548 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:52.548 Compiler for C supports arguments -mpclmul: YES 00:03:52.548 Compiler for C supports arguments -maes: YES 00:03:52.548 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:52.548 Compiler for C supports arguments -mavx512bw: YES 00:03:52.548 Compiler for C supports arguments -mavx512dq: YES 00:03:52.548 Compiler for C supports arguments -mavx512vl: YES 00:03:52.548 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:52.548 Compiler for C supports arguments -mavx2: YES 00:03:52.548 Compiler for C supports arguments -mavx: YES 00:03:52.548 Message: lib/net: Defining dependency "net" 00:03:52.548 Message: lib/meter: Defining dependency "meter" 00:03:52.548 Message: lib/ethdev: Defining dependency "ethdev" 00:03:52.548 Message: lib/pci: Defining dependency "pci" 00:03:52.548 Message: lib/cmdline: Defining dependency "cmdline" 00:03:52.548 Message: lib/hash: Defining dependency "hash" 00:03:52.548 Message: lib/timer: Defining dependency "timer" 00:03:52.548 Message: lib/compressdev: Defining dependency "compressdev" 00:03:52.548 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:52.548 Message: lib/dmadev: Defining dependency 
"dmadev" 00:03:52.548 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:52.548 Message: lib/power: Defining dependency "power" 00:03:52.548 Message: lib/reorder: Defining dependency "reorder" 00:03:52.548 Message: lib/security: Defining dependency "security" 00:03:52.548 Has header "linux/userfaultfd.h" : YES 00:03:52.548 Has header "linux/vduse.h" : YES 00:03:52.548 Message: lib/vhost: Defining dependency "vhost" 00:03:52.548 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:52.548 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:52.548 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:52.548 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:52.548 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:52.548 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:52.548 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:52.548 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:52.548 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:52.548 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:52.548 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:52.548 Configuring doxy-api-html.conf using configuration 00:03:52.548 Configuring doxy-api-man.conf using configuration 00:03:52.548 Program mandb found: YES (/usr/bin/mandb) 00:03:52.548 Program sphinx-build found: NO 00:03:52.548 Configuring rte_build_config.h using configuration 00:03:52.548 Message: 00:03:52.548 ================= 00:03:52.548 Applications Enabled 00:03:52.548 ================= 00:03:52.548 00:03:52.548 apps: 00:03:52.548 00:03:52.548 00:03:52.548 Message: 00:03:52.548 ================= 00:03:52.548 Libraries Enabled 00:03:52.548 ================= 00:03:52.548 00:03:52.548 libs: 00:03:52.548 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:52.548 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:52.548 cryptodev, dmadev, power, reorder, security, vhost, 00:03:52.548 00:03:52.548 Message: 00:03:52.548 =============== 00:03:52.548 Drivers Enabled 00:03:52.548 =============== 00:03:52.548 00:03:52.548 common: 00:03:52.548 00:03:52.548 bus: 00:03:52.548 pci, vdev, 00:03:52.548 mempool: 00:03:52.548 ring, 00:03:52.548 dma: 00:03:52.548 00:03:52.548 net: 00:03:52.548 00:03:52.548 crypto: 00:03:52.548 00:03:52.548 compress: 00:03:52.548 00:03:52.548 vdpa: 00:03:52.548 00:03:52.548 00:03:52.548 Message: 00:03:52.548 ================= 00:03:52.548 Content Skipped 00:03:52.548 ================= 00:03:52.548 00:03:52.548 apps: 00:03:52.548 dumpcap: explicitly disabled via build config 00:03:52.548 graph: explicitly disabled via build config 00:03:52.548 pdump: explicitly disabled via build config 00:03:52.548 proc-info: explicitly disabled via build config 00:03:52.548 test-acl: explicitly disabled via build config 00:03:52.548 test-bbdev: explicitly disabled via build config 00:03:52.548 test-cmdline: explicitly disabled via build config 00:03:52.548 test-compress-perf: explicitly disabled via build config 00:03:52.548 test-crypto-perf: explicitly disabled via build config 00:03:52.548 test-dma-perf: explicitly disabled via build config 00:03:52.548 test-eventdev: explicitly disabled via build config 00:03:52.548 test-fib: explicitly disabled via build config 00:03:52.548 test-flow-perf: explicitly disabled via build config 00:03:52.548 test-gpudev: explicitly 
disabled via build config 00:03:52.548 test-mldev: explicitly disabled via build config 00:03:52.548 test-pipeline: explicitly disabled via build config 00:03:52.548 test-pmd: explicitly disabled via build config 00:03:52.548 test-regex: explicitly disabled via build config 00:03:52.548 test-sad: explicitly disabled via build config 00:03:52.548 test-security-perf: explicitly disabled via build config 00:03:52.548 00:03:52.548 libs: 00:03:52.548 argparse: explicitly disabled via build config 00:03:52.548 metrics: explicitly disabled via build config 00:03:52.548 acl: explicitly disabled via build config 00:03:52.548 bbdev: explicitly disabled via build config 00:03:52.548 bitratestats: explicitly disabled via build config 00:03:52.548 bpf: explicitly disabled via build config 00:03:52.548 cfgfile: explicitly disabled via build config 00:03:52.548 distributor: explicitly disabled via build config 00:03:52.548 efd: explicitly disabled via build config 00:03:52.548 eventdev: explicitly disabled via build config 00:03:52.548 dispatcher: explicitly disabled via build config 00:03:52.548 gpudev: explicitly disabled via build config 00:03:52.548 gro: explicitly disabled via build config 00:03:52.548 gso: explicitly disabled via build config 00:03:52.548 ip_frag: explicitly disabled via build config 00:03:52.548 jobstats: explicitly disabled via build config 00:03:52.548 latencystats: explicitly disabled via build config 00:03:52.548 lpm: explicitly disabled via build config 00:03:52.548 member: explicitly disabled via build config 00:03:52.548 pcapng: explicitly disabled via build config 00:03:52.548 rawdev: explicitly disabled via build config 00:03:52.548 regexdev: explicitly disabled via build config 00:03:52.548 mldev: explicitly disabled via build config 00:03:52.548 rib: explicitly disabled via build config 00:03:52.548 sched: explicitly disabled via build config 00:03:52.548 stack: explicitly disabled via build config 00:03:52.548 ipsec: explicitly disabled via build config 00:03:52.548 pdcp: explicitly disabled via build config 00:03:52.548 fib: explicitly disabled via build config 00:03:52.548 port: explicitly disabled via build config 00:03:52.548 pdump: explicitly disabled via build config 00:03:52.548 table: explicitly disabled via build config 00:03:52.548 pipeline: explicitly disabled via build config 00:03:52.548 graph: explicitly disabled via build config 00:03:52.548 node: explicitly disabled via build config 00:03:52.548 00:03:52.548 drivers: 00:03:52.548 common/cpt: not in enabled drivers build config 00:03:52.548 common/dpaax: not in enabled drivers build config 00:03:52.548 common/iavf: not in enabled drivers build config 00:03:52.548 common/idpf: not in enabled drivers build config 00:03:52.548 common/ionic: not in enabled drivers build config 00:03:52.548 common/mvep: not in enabled drivers build config 00:03:52.548 common/octeontx: not in enabled drivers build config 00:03:52.548 bus/auxiliary: not in enabled drivers build config 00:03:52.548 bus/cdx: not in enabled drivers build config 00:03:52.548 bus/dpaa: not in enabled drivers build config 00:03:52.548 bus/fslmc: not in enabled drivers build config 00:03:52.548 bus/ifpga: not in enabled drivers build config 00:03:52.548 bus/platform: not in enabled drivers build config 00:03:52.548 bus/uacce: not in enabled drivers build config 00:03:52.548 bus/vmbus: not in enabled drivers build config 00:03:52.549 common/cnxk: not in enabled drivers build config 00:03:52.549 common/mlx5: not in enabled drivers build config 
00:03:52.549 common/nfp: not in enabled drivers build config 00:03:52.549 common/nitrox: not in enabled drivers build config 00:03:52.549 common/qat: not in enabled drivers build config 00:03:52.549 common/sfc_efx: not in enabled drivers build config 00:03:52.549 mempool/bucket: not in enabled drivers build config 00:03:52.549 mempool/cnxk: not in enabled drivers build config 00:03:52.549 mempool/dpaa: not in enabled drivers build config 00:03:52.549 mempool/dpaa2: not in enabled drivers build config 00:03:52.549 mempool/octeontx: not in enabled drivers build config 00:03:52.549 mempool/stack: not in enabled drivers build config 00:03:52.549 dma/cnxk: not in enabled drivers build config 00:03:52.549 dma/dpaa: not in enabled drivers build config 00:03:52.549 dma/dpaa2: not in enabled drivers build config 00:03:52.549 dma/hisilicon: not in enabled drivers build config 00:03:52.549 dma/idxd: not in enabled drivers build config 00:03:52.549 dma/ioat: not in enabled drivers build config 00:03:52.549 dma/skeleton: not in enabled drivers build config 00:03:52.549 net/af_packet: not in enabled drivers build config 00:03:52.549 net/af_xdp: not in enabled drivers build config 00:03:52.549 net/ark: not in enabled drivers build config 00:03:52.549 net/atlantic: not in enabled drivers build config 00:03:52.549 net/avp: not in enabled drivers build config 00:03:52.549 net/axgbe: not in enabled drivers build config 00:03:52.549 net/bnx2x: not in enabled drivers build config 00:03:52.549 net/bnxt: not in enabled drivers build config 00:03:52.549 net/bonding: not in enabled drivers build config 00:03:52.549 net/cnxk: not in enabled drivers build config 00:03:52.549 net/cpfl: not in enabled drivers build config 00:03:52.549 net/cxgbe: not in enabled drivers build config 00:03:52.549 net/dpaa: not in enabled drivers build config 00:03:52.549 net/dpaa2: not in enabled drivers build config 00:03:52.549 net/e1000: not in enabled drivers build config 00:03:52.549 net/ena: not in enabled drivers build config 00:03:52.549 net/enetc: not in enabled drivers build config 00:03:52.549 net/enetfec: not in enabled drivers build config 00:03:52.549 net/enic: not in enabled drivers build config 00:03:52.549 net/failsafe: not in enabled drivers build config 00:03:52.549 net/fm10k: not in enabled drivers build config 00:03:52.549 net/gve: not in enabled drivers build config 00:03:52.549 net/hinic: not in enabled drivers build config 00:03:52.549 net/hns3: not in enabled drivers build config 00:03:52.549 net/i40e: not in enabled drivers build config 00:03:52.549 net/iavf: not in enabled drivers build config 00:03:52.549 net/ice: not in enabled drivers build config 00:03:52.549 net/idpf: not in enabled drivers build config 00:03:52.549 net/igc: not in enabled drivers build config 00:03:52.549 net/ionic: not in enabled drivers build config 00:03:52.549 net/ipn3ke: not in enabled drivers build config 00:03:52.549 net/ixgbe: not in enabled drivers build config 00:03:52.549 net/mana: not in enabled drivers build config 00:03:52.549 net/memif: not in enabled drivers build config 00:03:52.549 net/mlx4: not in enabled drivers build config 00:03:52.549 net/mlx5: not in enabled drivers build config 00:03:52.549 net/mvneta: not in enabled drivers build config 00:03:52.549 net/mvpp2: not in enabled drivers build config 00:03:52.549 net/netvsc: not in enabled drivers build config 00:03:52.549 net/nfb: not in enabled drivers build config 00:03:52.549 net/nfp: not in enabled drivers build config 00:03:52.549 net/ngbe: not in enabled 
drivers build config 00:03:52.549 net/null: not in enabled drivers build config 00:03:52.549 net/octeontx: not in enabled drivers build config 00:03:52.549 net/octeon_ep: not in enabled drivers build config 00:03:52.549 net/pcap: not in enabled drivers build config 00:03:52.549 net/pfe: not in enabled drivers build config 00:03:52.549 net/qede: not in enabled drivers build config 00:03:52.549 net/ring: not in enabled drivers build config 00:03:52.549 net/sfc: not in enabled drivers build config 00:03:52.549 net/softnic: not in enabled drivers build config 00:03:52.549 net/tap: not in enabled drivers build config 00:03:52.549 net/thunderx: not in enabled drivers build config 00:03:52.549 net/txgbe: not in enabled drivers build config 00:03:52.549 net/vdev_netvsc: not in enabled drivers build config 00:03:52.549 net/vhost: not in enabled drivers build config 00:03:52.549 net/virtio: not in enabled drivers build config 00:03:52.549 net/vmxnet3: not in enabled drivers build config 00:03:52.549 raw/*: missing internal dependency, "rawdev" 00:03:52.549 crypto/armv8: not in enabled drivers build config 00:03:52.549 crypto/bcmfs: not in enabled drivers build config 00:03:52.549 crypto/caam_jr: not in enabled drivers build config 00:03:52.549 crypto/ccp: not in enabled drivers build config 00:03:52.549 crypto/cnxk: not in enabled drivers build config 00:03:52.549 crypto/dpaa_sec: not in enabled drivers build config 00:03:52.549 crypto/dpaa2_sec: not in enabled drivers build config 00:03:52.549 crypto/ipsec_mb: not in enabled drivers build config 00:03:52.549 crypto/mlx5: not in enabled drivers build config 00:03:52.549 crypto/mvsam: not in enabled drivers build config 00:03:52.549 crypto/nitrox: not in enabled drivers build config 00:03:52.549 crypto/null: not in enabled drivers build config 00:03:52.549 crypto/octeontx: not in enabled drivers build config 00:03:52.549 crypto/openssl: not in enabled drivers build config 00:03:52.549 crypto/scheduler: not in enabled drivers build config 00:03:52.549 crypto/uadk: not in enabled drivers build config 00:03:52.549 crypto/virtio: not in enabled drivers build config 00:03:52.549 compress/isal: not in enabled drivers build config 00:03:52.549 compress/mlx5: not in enabled drivers build config 00:03:52.549 compress/nitrox: not in enabled drivers build config 00:03:52.549 compress/octeontx: not in enabled drivers build config 00:03:52.549 compress/zlib: not in enabled drivers build config 00:03:52.549 regex/*: missing internal dependency, "regexdev" 00:03:52.549 ml/*: missing internal dependency, "mldev" 00:03:52.549 vdpa/ifc: not in enabled drivers build config 00:03:52.549 vdpa/mlx5: not in enabled drivers build config 00:03:52.549 vdpa/nfp: not in enabled drivers build config 00:03:52.549 vdpa/sfc: not in enabled drivers build config 00:03:52.549 event/*: missing internal dependency, "eventdev" 00:03:52.549 baseband/*: missing internal dependency, "bbdev" 00:03:52.549 gpu/*: missing internal dependency, "gpudev" 00:03:52.549 00:03:52.549 00:03:52.549 Build targets in project: 85 00:03:52.549 00:03:52.549 DPDK 24.03.0 00:03:52.549 00:03:52.549 User defined options 00:03:52.549 buildtype : debug 00:03:52.549 default_library : shared 00:03:52.549 libdir : lib 00:03:52.549 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:52.549 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:52.549 c_link_args : 00:03:52.549 cpu_instruction_set: native 00:03:52.549 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:52.549 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:52.549 enable_docs : false 00:03:52.549 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:52.549 enable_kmods : false 00:03:52.549 max_lcores : 128 00:03:52.549 tests : false 00:03:52.549 00:03:52.549 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:53.123 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:53.123 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:53.123 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:53.123 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:53.123 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:53.123 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:53.123 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:53.123 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:53.123 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:53.123 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:53.123 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:53.123 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:53.123 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:53.123 [13/268] Linking static target lib/librte_kvargs.a 00:03:53.123 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:53.123 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:53.384 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:53.384 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:53.384 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:53.384 [19/268] Linking static target lib/librte_log.a 00:03:53.384 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:53.384 [21/268] Linking static target lib/librte_pci.a 00:03:53.384 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:53.384 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:53.384 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:53.645 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:53.645 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:53.645 [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:53.645 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:53.645 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:53.645 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:53.645 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:53.645 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:53.645 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:53.645 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:53.645 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:53.645 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:53.645 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:53.645 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:53.645 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:53.645 [40/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:53.645 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:53.645 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:53.645 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:53.645 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:53.645 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:53.645 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:53.645 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:53.645 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:53.645 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:53.645 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:53.645 [51/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:53.645 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:53.645 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:53.645 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:53.645 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:53.645 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:53.645 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:53.645 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:53.645 [59/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:53.645 [60/268] Linking static target lib/librte_meter.a 00:03:53.645 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:53.645 [62/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:53.645 [63/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:53.645 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:53.645 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:53.645 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:53.645 [67/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:53.645 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:53.645 [69/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:53.645 [70/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:53.645 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:53.645 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:53.645 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:53.645 [74/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:53.646 [75/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:53.646 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:53.646 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:53.646 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:53.646 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:53.646 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:53.646 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:53.646 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:53.646 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:53.646 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:53.646 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:53.646 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:53.646 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:53.646 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:53.646 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:53.646 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:53.906 [91/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.906 [92/268] Linking static target lib/librte_ring.a 00:03:53.906 [93/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:53.906 [94/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:53.906 [95/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:53.906 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:53.906 [97/268] Linking static target lib/librte_telemetry.a 00:03:53.906 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:53.906 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:53.906 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:53.906 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:53.906 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:53.906 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:53.906 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:53.906 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:53.906 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:53.906 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:53.906 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:53.906 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:53.906 [110/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:53.906 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:53.906 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:53.906 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:53.906 [114/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.906 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:53.906 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:53.906 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:53.906 [118/268] Linking static target lib/librte_rcu.a 00:03:53.906 [119/268] Linking static target lib/librte_mempool.a 00:03:53.906 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:53.906 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:53.906 [122/268] Linking static target lib/librte_net.a 00:03:53.906 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:53.906 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:53.907 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:53.907 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:53.907 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:53.907 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:53.907 [129/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.907 [130/268] Linking static target lib/librte_eal.a 00:03:53.907 [131/268] Linking static target lib/librte_cmdline.a 00:03:53.907 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:53.907 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:53.907 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:53.907 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:53.907 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:53.907 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.165 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:54.165 [139/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:54.165 [140/268] Linking target lib/librte_log.so.24.1 00:03:54.165 [141/268] Linking static target lib/librte_mbuf.a 00:03:54.165 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.165 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:54.165 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:54.165 [145/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:54.165 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:54.165 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:54.165 [148/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:54.165 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:54.165 [150/268] Linking static target lib/librte_timer.a 00:03:54.165 [151/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:54.165 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:54.165 [153/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.165 [154/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.165 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:54.165 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:54.165 [157/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:54.165 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:54.165 [159/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:54.165 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:54.165 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:54.165 [162/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:54.165 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:54.165 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:54.166 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:54.166 [166/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.166 [167/268] Linking target lib/librte_kvargs.so.24.1 00:03:54.166 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:54.166 [169/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:54.166 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:54.166 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:54.166 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:54.166 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:54.166 [174/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:54.166 [175/268] Linking target lib/librte_telemetry.so.24.1 00:03:54.166 [176/268] Linking static target lib/librte_power.a 00:03:54.166 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:54.166 [178/268] Linking static target lib/librte_reorder.a 00:03:54.166 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:54.166 [180/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:54.166 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:54.166 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:54.166 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:54.166 [184/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:54.166 [185/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:54.425 [186/268] Linking static target lib/librte_compressdev.a 00:03:54.425 [187/268] Linking static target lib/librte_dmadev.a 00:03:54.425 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:54.425 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:54.425 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:54.425 [191/268] 
Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:54.425 [192/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:54.425 [193/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:54.425 [194/268] Linking static target drivers/librte_bus_vdev.a 00:03:54.425 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:54.425 [196/268] Linking static target lib/librte_security.a 00:03:54.425 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:54.425 [198/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:54.425 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:54.425 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:54.425 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:54.425 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:54.425 [203/268] Linking static target drivers/librte_mempool_ring.a 00:03:54.425 [204/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:54.425 [205/268] Linking static target lib/librte_hash.a 00:03:54.425 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:54.425 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:54.425 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.684 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:54.684 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:54.684 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.684 [212/268] Linking static target drivers/librte_bus_pci.a 00:03:54.684 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:54.684 [214/268] Linking static target lib/librte_cryptodev.a 00:03:54.684 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.684 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.684 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.684 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:54.684 [219/268] Linking static target lib/librte_ethdev.a 00:03:54.943 [220/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.943 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.943 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.943 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.943 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:55.202 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.202 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.461 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:56.029 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:56.029 [229/268] Linking static target lib/librte_vhost.a 00:03:56.596 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.975 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.250 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.510 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.770 [234/268] Linking target lib/librte_eal.so.24.1 00:04:03.770 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:03.770 [236/268] Linking target lib/librte_ring.so.24.1 00:04:03.770 [237/268] Linking target lib/librte_dmadev.so.24.1 00:04:03.770 [238/268] Linking target lib/librte_timer.so.24.1 00:04:03.770 [239/268] Linking target lib/librte_meter.so.24.1 00:04:03.770 [240/268] Linking target lib/librte_pci.so.24.1 00:04:03.770 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:04.044 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:04.044 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:04.044 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:04.044 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:04.044 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:04.044 [247/268] Linking target lib/librte_mempool.so.24.1 00:04:04.044 [248/268] Linking target lib/librte_rcu.so.24.1 00:04:04.044 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:04.044 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:04.044 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:04.304 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:04.304 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:04.304 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:04.304 [255/268] Linking target lib/librte_net.so.24.1 00:04:04.304 [256/268] Linking target lib/librte_reorder.so.24.1 00:04:04.304 [257/268] Linking target lib/librte_compressdev.so.24.1 00:04:04.304 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:04.564 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:04.564 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:04.564 [261/268] Linking target lib/librte_hash.so.24.1 00:04:04.564 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:04.564 [263/268] Linking target lib/librte_security.so.24.1 00:04:04.564 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:04.564 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:04.823 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:04.823 [267/268] Linking target lib/librte_power.so.24.1 00:04:04.823 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:04.823 INFO: autodetecting backend as ninja 00:04:04.823 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 
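The DPDK sub-build that finishes above is driven by the meson configuration summarized at the start of this section (the disabled library list, the bus/mempool/power driver set, enable_docs/enable_kmods/tests off, max_lcores 128) and is then compiled with the ninja command the log prints. As a reading aid, a minimal hand-run equivalent might look like the sketch below; the -D option names are DPDK's standard meson options and are an assumption on the editor's part, while the values are copied from the configuration dump above, so this is a sketch rather than the exact command the test harness executed.

#!/usr/bin/env bash
# Hand-driven sketch of the DPDK sub-build summarized above (assumes DPDK's
# standard meson -D option names; values are taken from the configuration dump).
set -e
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk

meson setup build-tmp \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dmax_lcores=128 \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Ddisable_libs=bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump

# Same backend invocation the log prints next ("ninja -C .../dpdk/build-tmp -j 96").
ninja -C build-tmp -j 96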
00:04:17.035 CC lib/ut_mock/mock.o 00:04:17.035 CC lib/log/log.o 00:04:17.035 CC lib/log/log_flags.o 00:04:17.035 CC lib/log/log_deprecated.o 00:04:17.035 CC lib/ut/ut.o 00:04:17.035 LIB libspdk_ut_mock.a 00:04:17.035 LIB libspdk_log.a 00:04:17.035 LIB libspdk_ut.a 00:04:17.035 SO libspdk_ut_mock.so.6.0 00:04:17.035 SO libspdk_log.so.7.1 00:04:17.035 SO libspdk_ut.so.2.0 00:04:17.035 SYMLINK libspdk_ut_mock.so 00:04:17.035 SYMLINK libspdk_log.so 00:04:17.035 SYMLINK libspdk_ut.so 00:04:17.035 CC lib/ioat/ioat.o 00:04:17.035 CXX lib/trace_parser/trace.o 00:04:17.035 CC lib/util/base64.o 00:04:17.035 CC lib/util/bit_array.o 00:04:17.035 CC lib/util/cpuset.o 00:04:17.035 CC lib/util/crc32.o 00:04:17.035 CC lib/util/crc16.o 00:04:17.035 CC lib/util/crc32c.o 00:04:17.035 CC lib/util/crc32_ieee.o 00:04:17.035 CC lib/dma/dma.o 00:04:17.035 CC lib/util/crc64.o 00:04:17.035 CC lib/util/dif.o 00:04:17.035 CC lib/util/fd.o 00:04:17.035 CC lib/util/fd_group.o 00:04:17.035 CC lib/util/file.o 00:04:17.035 CC lib/util/hexlify.o 00:04:17.035 CC lib/util/iov.o 00:04:17.035 CC lib/util/math.o 00:04:17.035 CC lib/util/net.o 00:04:17.035 CC lib/util/pipe.o 00:04:17.035 CC lib/util/strerror_tls.o 00:04:17.035 CC lib/util/string.o 00:04:17.035 CC lib/util/uuid.o 00:04:17.035 CC lib/util/xor.o 00:04:17.035 CC lib/util/zipf.o 00:04:17.035 CC lib/util/md5.o 00:04:17.035 CC lib/vfio_user/host/vfio_user_pci.o 00:04:17.035 CC lib/vfio_user/host/vfio_user.o 00:04:17.035 LIB libspdk_dma.a 00:04:17.035 LIB libspdk_ioat.a 00:04:17.035 SO libspdk_dma.so.5.0 00:04:17.035 SO libspdk_ioat.so.7.0 00:04:17.035 SYMLINK libspdk_dma.so 00:04:17.035 SYMLINK libspdk_ioat.so 00:04:17.035 LIB libspdk_vfio_user.a 00:04:17.035 SO libspdk_vfio_user.so.5.0 00:04:17.035 SYMLINK libspdk_vfio_user.so 00:04:17.035 LIB libspdk_util.a 00:04:17.035 SO libspdk_util.so.10.1 00:04:17.035 SYMLINK libspdk_util.so 00:04:17.035 LIB libspdk_trace_parser.a 00:04:17.035 SO libspdk_trace_parser.so.6.0 00:04:17.035 SYMLINK libspdk_trace_parser.so 00:04:17.035 CC lib/conf/conf.o 00:04:17.035 CC lib/env_dpdk/env.o 00:04:17.035 CC lib/env_dpdk/init.o 00:04:17.035 CC lib/env_dpdk/memory.o 00:04:17.035 CC lib/env_dpdk/pci.o 00:04:17.035 CC lib/env_dpdk/pci_ioat.o 00:04:17.035 CC lib/env_dpdk/threads.o 00:04:17.035 CC lib/env_dpdk/pci_virtio.o 00:04:17.035 CC lib/env_dpdk/pci_vmd.o 00:04:17.035 CC lib/vmd/vmd.o 00:04:17.035 CC lib/env_dpdk/pci_idxd.o 00:04:17.035 CC lib/env_dpdk/pci_dpdk.o 00:04:17.035 CC lib/env_dpdk/pci_event.o 00:04:17.035 CC lib/vmd/led.o 00:04:17.035 CC lib/env_dpdk/sigbus_handler.o 00:04:17.035 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:17.035 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:17.035 CC lib/idxd/idxd.o 00:04:17.035 CC lib/idxd/idxd_user.o 00:04:17.035 CC lib/idxd/idxd_kernel.o 00:04:17.035 CC lib/json/json_parse.o 00:04:17.035 CC lib/json/json_util.o 00:04:17.035 CC lib/rdma_utils/rdma_utils.o 00:04:17.035 CC lib/json/json_write.o 00:04:17.035 LIB libspdk_conf.a 00:04:17.035 SO libspdk_conf.so.6.0 00:04:17.035 LIB libspdk_rdma_utils.a 00:04:17.035 SYMLINK libspdk_conf.so 00:04:17.035 LIB libspdk_json.a 00:04:17.035 SO libspdk_rdma_utils.so.1.0 00:04:17.035 SO libspdk_json.so.6.0 00:04:17.035 SYMLINK libspdk_rdma_utils.so 00:04:17.293 SYMLINK libspdk_json.so 00:04:17.293 LIB libspdk_idxd.a 00:04:17.293 LIB libspdk_vmd.a 00:04:17.293 SO libspdk_idxd.so.12.1 00:04:17.293 SO libspdk_vmd.so.6.0 00:04:17.293 SYMLINK libspdk_idxd.so 00:04:17.552 SYMLINK libspdk_vmd.so 00:04:17.552 CC lib/rdma_provider/common.o 00:04:17.552 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:04:17.552 CC lib/jsonrpc/jsonrpc_server.o 00:04:17.552 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:17.552 CC lib/jsonrpc/jsonrpc_client.o 00:04:17.552 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:17.552 LIB libspdk_rdma_provider.a 00:04:17.811 SO libspdk_rdma_provider.so.7.0 00:04:17.811 LIB libspdk_jsonrpc.a 00:04:17.811 SYMLINK libspdk_rdma_provider.so 00:04:17.811 SO libspdk_jsonrpc.so.6.0 00:04:17.811 SYMLINK libspdk_jsonrpc.so 00:04:17.811 LIB libspdk_env_dpdk.a 00:04:17.811 SO libspdk_env_dpdk.so.15.1 00:04:18.069 SYMLINK libspdk_env_dpdk.so 00:04:18.069 CC lib/rpc/rpc.o 00:04:18.326 LIB libspdk_rpc.a 00:04:18.326 SO libspdk_rpc.so.6.0 00:04:18.326 SYMLINK libspdk_rpc.so 00:04:18.584 CC lib/trace/trace.o 00:04:18.584 CC lib/trace/trace_flags.o 00:04:18.584 CC lib/trace/trace_rpc.o 00:04:18.584 CC lib/keyring/keyring.o 00:04:18.584 CC lib/keyring/keyring_rpc.o 00:04:18.584 CC lib/notify/notify.o 00:04:18.584 CC lib/notify/notify_rpc.o 00:04:18.843 LIB libspdk_notify.a 00:04:18.843 LIB libspdk_keyring.a 00:04:18.843 LIB libspdk_trace.a 00:04:18.843 SO libspdk_notify.so.6.0 00:04:18.843 SO libspdk_keyring.so.2.0 00:04:18.843 SO libspdk_trace.so.11.0 00:04:18.843 SYMLINK libspdk_keyring.so 00:04:18.843 SYMLINK libspdk_notify.so 00:04:18.843 SYMLINK libspdk_trace.so 00:04:19.410 CC lib/sock/sock.o 00:04:19.410 CC lib/sock/sock_rpc.o 00:04:19.410 CC lib/thread/thread.o 00:04:19.410 CC lib/thread/iobuf.o 00:04:19.668 LIB libspdk_sock.a 00:04:19.668 SO libspdk_sock.so.10.0 00:04:19.668 SYMLINK libspdk_sock.so 00:04:19.927 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:19.927 CC lib/nvme/nvme_ctrlr.o 00:04:19.927 CC lib/nvme/nvme_ns_cmd.o 00:04:19.927 CC lib/nvme/nvme_fabric.o 00:04:19.927 CC lib/nvme/nvme_ns.o 00:04:19.927 CC lib/nvme/nvme_pcie_common.o 00:04:19.927 CC lib/nvme/nvme_qpair.o 00:04:19.927 CC lib/nvme/nvme_pcie.o 00:04:19.927 CC lib/nvme/nvme_quirks.o 00:04:19.927 CC lib/nvme/nvme.o 00:04:19.927 CC lib/nvme/nvme_discovery.o 00:04:19.927 CC lib/nvme/nvme_transport.o 00:04:19.927 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:19.927 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:19.927 CC lib/nvme/nvme_tcp.o 00:04:19.927 CC lib/nvme/nvme_opal.o 00:04:19.927 CC lib/nvme/nvme_io_msg.o 00:04:19.927 CC lib/nvme/nvme_poll_group.o 00:04:19.927 CC lib/nvme/nvme_zns.o 00:04:19.927 CC lib/nvme/nvme_stubs.o 00:04:19.927 CC lib/nvme/nvme_auth.o 00:04:19.927 CC lib/nvme/nvme_cuse.o 00:04:19.927 CC lib/nvme/nvme_vfio_user.o 00:04:19.927 CC lib/nvme/nvme_rdma.o 00:04:20.495 LIB libspdk_thread.a 00:04:20.495 SO libspdk_thread.so.11.0 00:04:20.495 SYMLINK libspdk_thread.so 00:04:20.754 CC lib/virtio/virtio.o 00:04:20.754 CC lib/virtio/virtio_vhost_user.o 00:04:20.754 CC lib/blob/zeroes.o 00:04:20.754 CC lib/virtio/virtio_pci.o 00:04:20.754 CC lib/blob/request.o 00:04:20.754 CC lib/blob/blobstore.o 00:04:20.754 CC lib/virtio/virtio_vfio_user.o 00:04:20.754 CC lib/blob/blob_bs_dev.o 00:04:20.754 CC lib/init/json_config.o 00:04:20.754 CC lib/init/subsystem.o 00:04:20.754 CC lib/init/subsystem_rpc.o 00:04:20.754 CC lib/init/rpc.o 00:04:20.754 CC lib/accel/accel.o 00:04:20.754 CC lib/accel/accel_rpc.o 00:04:20.754 CC lib/fsdev/fsdev.o 00:04:20.754 CC lib/accel/accel_sw.o 00:04:20.754 CC lib/fsdev/fsdev_io.o 00:04:20.754 CC lib/fsdev/fsdev_rpc.o 00:04:20.754 CC lib/vfu_tgt/tgt_endpoint.o 00:04:20.754 CC lib/vfu_tgt/tgt_rpc.o 00:04:21.013 LIB libspdk_init.a 00:04:21.013 SO libspdk_init.so.6.0 00:04:21.013 LIB libspdk_virtio.a 00:04:21.013 LIB libspdk_vfu_tgt.a 00:04:21.013 SYMLINK 
libspdk_init.so 00:04:21.013 SO libspdk_virtio.so.7.0 00:04:21.013 SO libspdk_vfu_tgt.so.3.0 00:04:21.013 SYMLINK libspdk_vfu_tgt.so 00:04:21.013 SYMLINK libspdk_virtio.so 00:04:21.277 LIB libspdk_fsdev.a 00:04:21.277 SO libspdk_fsdev.so.2.0 00:04:21.277 CC lib/event/app.o 00:04:21.277 CC lib/event/reactor.o 00:04:21.277 CC lib/event/log_rpc.o 00:04:21.277 CC lib/event/app_rpc.o 00:04:21.277 CC lib/event/scheduler_static.o 00:04:21.277 SYMLINK libspdk_fsdev.so 00:04:21.535 LIB libspdk_accel.a 00:04:21.535 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:21.535 SO libspdk_accel.so.16.0 00:04:21.535 LIB libspdk_nvme.a 00:04:21.535 LIB libspdk_event.a 00:04:21.795 SYMLINK libspdk_accel.so 00:04:21.795 SO libspdk_event.so.14.0 00:04:21.795 SO libspdk_nvme.so.15.0 00:04:21.795 SYMLINK libspdk_event.so 00:04:21.795 SYMLINK libspdk_nvme.so 00:04:22.054 CC lib/bdev/bdev.o 00:04:22.054 CC lib/bdev/bdev_rpc.o 00:04:22.054 CC lib/bdev/bdev_zone.o 00:04:22.054 CC lib/bdev/part.o 00:04:22.054 CC lib/bdev/scsi_nvme.o 00:04:22.054 LIB libspdk_fuse_dispatcher.a 00:04:22.054 SO libspdk_fuse_dispatcher.so.1.0 00:04:22.054 SYMLINK libspdk_fuse_dispatcher.so 00:04:22.992 LIB libspdk_blob.a 00:04:22.992 SO libspdk_blob.so.12.0 00:04:22.992 SYMLINK libspdk_blob.so 00:04:23.251 CC lib/lvol/lvol.o 00:04:23.251 CC lib/blobfs/blobfs.o 00:04:23.251 CC lib/blobfs/tree.o 00:04:23.819 LIB libspdk_bdev.a 00:04:23.819 SO libspdk_bdev.so.17.0 00:04:23.819 SYMLINK libspdk_bdev.so 00:04:23.819 LIB libspdk_blobfs.a 00:04:24.078 SO libspdk_blobfs.so.11.0 00:04:24.078 LIB libspdk_lvol.a 00:04:24.078 SO libspdk_lvol.so.11.0 00:04:24.078 SYMLINK libspdk_blobfs.so 00:04:24.078 SYMLINK libspdk_lvol.so 00:04:24.078 CC lib/scsi/lun.o 00:04:24.078 CC lib/scsi/dev.o 00:04:24.078 CC lib/scsi/port.o 00:04:24.078 CC lib/scsi/scsi.o 00:04:24.078 CC lib/scsi/scsi_bdev.o 00:04:24.078 CC lib/scsi/scsi_pr.o 00:04:24.078 CC lib/scsi/scsi_rpc.o 00:04:24.078 CC lib/scsi/task.o 00:04:24.078 CC lib/ftl/ftl_init.o 00:04:24.078 CC lib/ftl/ftl_core.o 00:04:24.078 CC lib/ftl/ftl_layout.o 00:04:24.078 CC lib/ftl/ftl_io.o 00:04:24.078 CC lib/ftl/ftl_debug.o 00:04:24.078 CC lib/ftl/ftl_sb.o 00:04:24.078 CC lib/ftl/ftl_l2p.o 00:04:24.078 CC lib/ftl/ftl_l2p_flat.o 00:04:24.336 CC lib/ftl/ftl_nv_cache.o 00:04:24.336 CC lib/ftl/ftl_band.o 00:04:24.336 CC lib/ftl/ftl_band_ops.o 00:04:24.336 CC lib/nbd/nbd.o 00:04:24.336 CC lib/ftl/ftl_writer.o 00:04:24.336 CC lib/nbd/nbd_rpc.o 00:04:24.336 CC lib/ftl/ftl_rq.o 00:04:24.336 CC lib/ftl/ftl_reloc.o 00:04:24.336 CC lib/ftl/ftl_l2p_cache.o 00:04:24.336 CC lib/ftl/ftl_p2l.o 00:04:24.336 CC lib/ublk/ublk.o 00:04:24.336 CC lib/ftl/ftl_p2l_log.o 00:04:24.336 CC lib/nvmf/ctrlr.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt.o 00:04:24.336 CC lib/ublk/ublk_rpc.o 00:04:24.336 CC lib/nvmf/ctrlr_discovery.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:24.336 CC lib/nvmf/subsystem.o 00:04:24.336 CC lib/nvmf/ctrlr_bdev.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:24.336 CC lib/nvmf/nvmf_rpc.o 00:04:24.336 CC lib/nvmf/transport.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:24.336 CC lib/nvmf/nvmf.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:24.336 CC lib/nvmf/stubs.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:24.336 CC lib/nvmf/mdns_server.o 00:04:24.336 CC lib/nvmf/tcp.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:04:24.336 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:24.336 CC lib/nvmf/vfio_user.o 00:04:24.336 CC lib/nvmf/rdma.o 00:04:24.336 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:24.336 CC lib/nvmf/auth.o 00:04:24.336 CC lib/ftl/utils/ftl_conf.o 00:04:24.336 CC lib/ftl/utils/ftl_md.o 00:04:24.336 CC lib/ftl/utils/ftl_bitmap.o 00:04:24.336 CC lib/ftl/utils/ftl_mempool.o 00:04:24.336 CC lib/ftl/utils/ftl_property.o 00:04:24.336 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:24.336 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:24.336 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:24.336 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:24.336 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:24.336 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:24.337 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:24.337 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:24.337 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:24.337 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:24.337 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:24.337 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:24.337 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:24.337 CC lib/ftl/base/ftl_base_dev.o 00:04:24.337 CC lib/ftl/base/ftl_base_bdev.o 00:04:24.337 CC lib/ftl/ftl_trace.o 00:04:24.906 LIB libspdk_nbd.a 00:04:24.906 SO libspdk_nbd.so.7.0 00:04:24.906 SYMLINK libspdk_nbd.so 00:04:24.906 LIB libspdk_scsi.a 00:04:24.906 LIB libspdk_ublk.a 00:04:24.906 SO libspdk_scsi.so.9.0 00:04:24.906 SO libspdk_ublk.so.3.0 00:04:25.165 SYMLINK libspdk_ublk.so 00:04:25.165 SYMLINK libspdk_scsi.so 00:04:25.165 LIB libspdk_ftl.a 00:04:25.424 CC lib/iscsi/conn.o 00:04:25.424 CC lib/iscsi/iscsi.o 00:04:25.424 CC lib/iscsi/param.o 00:04:25.424 CC lib/iscsi/init_grp.o 00:04:25.424 CC lib/iscsi/portal_grp.o 00:04:25.424 CC lib/iscsi/tgt_node.o 00:04:25.424 CC lib/iscsi/task.o 00:04:25.424 CC lib/iscsi/iscsi_subsystem.o 00:04:25.424 CC lib/iscsi/iscsi_rpc.o 00:04:25.424 CC lib/vhost/vhost_rpc.o 00:04:25.424 CC lib/vhost/vhost.o 00:04:25.424 CC lib/vhost/vhost_scsi.o 00:04:25.424 CC lib/vhost/vhost_blk.o 00:04:25.424 CC lib/vhost/rte_vhost_user.o 00:04:25.424 SO libspdk_ftl.so.9.0 00:04:25.684 SYMLINK libspdk_ftl.so 00:04:25.943 LIB libspdk_nvmf.a 00:04:25.943 SO libspdk_nvmf.so.20.0 00:04:26.203 LIB libspdk_vhost.a 00:04:26.203 SO libspdk_vhost.so.8.0 00:04:26.203 SYMLINK libspdk_nvmf.so 00:04:26.203 SYMLINK libspdk_vhost.so 00:04:26.203 LIB libspdk_iscsi.a 00:04:26.463 SO libspdk_iscsi.so.8.0 00:04:26.463 SYMLINK libspdk_iscsi.so 00:04:27.030 CC module/env_dpdk/env_dpdk_rpc.o 00:04:27.030 CC module/vfu_device/vfu_virtio.o 00:04:27.030 CC module/vfu_device/vfu_virtio_scsi.o 00:04:27.031 CC module/vfu_device/vfu_virtio_blk.o 00:04:27.031 CC module/vfu_device/vfu_virtio_rpc.o 00:04:27.031 CC module/vfu_device/vfu_virtio_fs.o 00:04:27.031 CC module/keyring/file/keyring.o 00:04:27.031 CC module/keyring/file/keyring_rpc.o 00:04:27.031 CC module/fsdev/aio/fsdev_aio.o 00:04:27.031 LIB libspdk_env_dpdk_rpc.a 00:04:27.031 CC module/sock/posix/posix.o 00:04:27.031 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:27.031 CC module/fsdev/aio/linux_aio_mgr.o 00:04:27.031 CC module/blob/bdev/blob_bdev.o 00:04:27.031 CC module/keyring/linux/keyring.o 00:04:27.031 CC module/keyring/linux/keyring_rpc.o 00:04:27.031 CC module/accel/ioat/accel_ioat.o 00:04:27.031 CC module/accel/ioat/accel_ioat_rpc.o 00:04:27.031 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:27.031 CC module/scheduler/gscheduler/gscheduler.o 00:04:27.031 CC module/accel/error/accel_error.o 00:04:27.031 CC module/accel/iaa/accel_iaa.o 00:04:27.031 CC 
module/accel/error/accel_error_rpc.o 00:04:27.031 CC module/accel/iaa/accel_iaa_rpc.o 00:04:27.031 CC module/accel/dsa/accel_dsa.o 00:04:27.031 CC module/accel/dsa/accel_dsa_rpc.o 00:04:27.031 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:27.031 SO libspdk_env_dpdk_rpc.so.6.0 00:04:27.290 SYMLINK libspdk_env_dpdk_rpc.so 00:04:27.290 LIB libspdk_keyring_file.a 00:04:27.290 LIB libspdk_scheduler_gscheduler.a 00:04:27.290 LIB libspdk_keyring_linux.a 00:04:27.290 SO libspdk_keyring_file.so.2.0 00:04:27.290 LIB libspdk_accel_ioat.a 00:04:27.290 LIB libspdk_scheduler_dpdk_governor.a 00:04:27.290 SO libspdk_scheduler_gscheduler.so.4.0 00:04:27.290 SO libspdk_keyring_linux.so.1.0 00:04:27.290 LIB libspdk_scheduler_dynamic.a 00:04:27.290 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:27.290 SO libspdk_accel_ioat.so.6.0 00:04:27.290 LIB libspdk_accel_error.a 00:04:27.290 SO libspdk_scheduler_dynamic.so.4.0 00:04:27.290 LIB libspdk_accel_iaa.a 00:04:27.290 SYMLINK libspdk_keyring_file.so 00:04:27.290 SYMLINK libspdk_scheduler_gscheduler.so 00:04:27.290 LIB libspdk_blob_bdev.a 00:04:27.290 SYMLINK libspdk_keyring_linux.so 00:04:27.290 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:27.290 SO libspdk_accel_error.so.2.0 00:04:27.290 SO libspdk_accel_iaa.so.3.0 00:04:27.290 SYMLINK libspdk_accel_ioat.so 00:04:27.290 SYMLINK libspdk_scheduler_dynamic.so 00:04:27.290 LIB libspdk_accel_dsa.a 00:04:27.290 SO libspdk_blob_bdev.so.12.0 00:04:27.550 SO libspdk_accel_dsa.so.5.0 00:04:27.550 SYMLINK libspdk_accel_error.so 00:04:27.550 SYMLINK libspdk_accel_iaa.so 00:04:27.550 SYMLINK libspdk_blob_bdev.so 00:04:27.550 SYMLINK libspdk_accel_dsa.so 00:04:27.550 LIB libspdk_vfu_device.a 00:04:27.550 SO libspdk_vfu_device.so.3.0 00:04:27.550 SYMLINK libspdk_vfu_device.so 00:04:27.550 LIB libspdk_fsdev_aio.a 00:04:27.550 SO libspdk_fsdev_aio.so.1.0 00:04:27.808 LIB libspdk_sock_posix.a 00:04:27.808 SYMLINK libspdk_fsdev_aio.so 00:04:27.808 SO libspdk_sock_posix.so.6.0 00:04:27.808 SYMLINK libspdk_sock_posix.so 00:04:27.808 CC module/bdev/lvol/vbdev_lvol.o 00:04:27.808 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:27.808 CC module/bdev/error/vbdev_error.o 00:04:27.808 CC module/bdev/error/vbdev_error_rpc.o 00:04:27.808 CC module/bdev/malloc/bdev_malloc.o 00:04:27.808 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:27.808 CC module/bdev/null/bdev_null_rpc.o 00:04:27.808 CC module/bdev/null/bdev_null.o 00:04:27.808 CC module/bdev/raid/bdev_raid.o 00:04:27.808 CC module/bdev/raid/raid1.o 00:04:27.808 CC module/bdev/raid/bdev_raid_rpc.o 00:04:27.808 CC module/bdev/raid/raid0.o 00:04:27.808 CC module/bdev/raid/bdev_raid_sb.o 00:04:27.808 CC module/bdev/raid/concat.o 00:04:27.808 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:27.808 CC module/blobfs/bdev/blobfs_bdev.o 00:04:27.808 CC module/bdev/split/vbdev_split.o 00:04:27.808 CC module/bdev/gpt/gpt.o 00:04:27.808 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:27.808 CC module/bdev/split/vbdev_split_rpc.o 00:04:27.808 CC module/bdev/nvme/bdev_nvme.o 00:04:27.808 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:27.808 CC module/bdev/delay/vbdev_delay.o 00:04:27.808 CC module/bdev/iscsi/bdev_iscsi.o 00:04:27.808 CC module/bdev/gpt/vbdev_gpt.o 00:04:27.808 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:27.808 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:27.808 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:27.808 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:27.808 CC module/bdev/nvme/bdev_mdns_client.o 00:04:27.808 CC module/bdev/nvme/nvme_rpc.o 00:04:27.808 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:04:27.808 CC module/bdev/nvme/vbdev_opal.o 00:04:27.808 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:27.808 CC module/bdev/passthru/vbdev_passthru.o 00:04:27.808 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:27.808 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:27.808 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:27.808 CC module/bdev/aio/bdev_aio.o 00:04:27.808 CC module/bdev/aio/bdev_aio_rpc.o 00:04:27.809 CC module/bdev/ftl/bdev_ftl.o 00:04:27.809 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.067 LIB libspdk_blobfs_bdev.a 00:04:28.067 LIB libspdk_bdev_error.a 00:04:28.067 SO libspdk_blobfs_bdev.so.6.0 00:04:28.067 LIB libspdk_bdev_null.a 00:04:28.326 SO libspdk_bdev_error.so.6.0 00:04:28.326 LIB libspdk_bdev_split.a 00:04:28.326 SO libspdk_bdev_null.so.6.0 00:04:28.326 LIB libspdk_bdev_passthru.a 00:04:28.326 LIB libspdk_bdev_ftl.a 00:04:28.326 SO libspdk_bdev_split.so.6.0 00:04:28.326 SYMLINK libspdk_blobfs_bdev.so 00:04:28.326 LIB libspdk_bdev_gpt.a 00:04:28.326 SYMLINK libspdk_bdev_error.so 00:04:28.326 SO libspdk_bdev_passthru.so.6.0 00:04:28.326 LIB libspdk_bdev_zone_block.a 00:04:28.326 LIB libspdk_bdev_malloc.a 00:04:28.326 SO libspdk_bdev_ftl.so.6.0 00:04:28.326 SO libspdk_bdev_gpt.so.6.0 00:04:28.326 SYMLINK libspdk_bdev_null.so 00:04:28.326 LIB libspdk_bdev_delay.a 00:04:28.326 LIB libspdk_bdev_iscsi.a 00:04:28.326 SYMLINK libspdk_bdev_split.so 00:04:28.326 LIB libspdk_bdev_aio.a 00:04:28.326 SO libspdk_bdev_zone_block.so.6.0 00:04:28.326 SO libspdk_bdev_malloc.so.6.0 00:04:28.326 SO libspdk_bdev_delay.so.6.0 00:04:28.326 SO libspdk_bdev_aio.so.6.0 00:04:28.326 SO libspdk_bdev_iscsi.so.6.0 00:04:28.326 SYMLINK libspdk_bdev_passthru.so 00:04:28.326 SYMLINK libspdk_bdev_ftl.so 00:04:28.326 LIB libspdk_bdev_lvol.a 00:04:28.326 SYMLINK libspdk_bdev_gpt.so 00:04:28.326 SO libspdk_bdev_lvol.so.6.0 00:04:28.326 SYMLINK libspdk_bdev_zone_block.so 00:04:28.326 SYMLINK libspdk_bdev_malloc.so 00:04:28.326 SYMLINK libspdk_bdev_aio.so 00:04:28.326 SYMLINK libspdk_bdev_delay.so 00:04:28.326 SYMLINK libspdk_bdev_iscsi.so 00:04:28.326 LIB libspdk_bdev_virtio.a 00:04:28.326 SO libspdk_bdev_virtio.so.6.0 00:04:28.326 SYMLINK libspdk_bdev_lvol.so 00:04:28.586 SYMLINK libspdk_bdev_virtio.so 00:04:28.586 LIB libspdk_bdev_raid.a 00:04:28.846 SO libspdk_bdev_raid.so.6.0 00:04:28.846 SYMLINK libspdk_bdev_raid.so 00:04:29.785 LIB libspdk_bdev_nvme.a 00:04:29.785 SO libspdk_bdev_nvme.so.7.1 00:04:30.044 SYMLINK libspdk_bdev_nvme.so 00:04:30.613 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:30.613 CC module/event/subsystems/keyring/keyring.o 00:04:30.613 CC module/event/subsystems/scheduler/scheduler.o 00:04:30.613 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:30.613 CC module/event/subsystems/vmd/vmd.o 00:04:30.613 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:30.613 CC module/event/subsystems/fsdev/fsdev.o 00:04:30.613 CC module/event/subsystems/iobuf/iobuf.o 00:04:30.613 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:30.613 CC module/event/subsystems/sock/sock.o 00:04:30.613 LIB libspdk_event_keyring.a 00:04:30.613 LIB libspdk_event_scheduler.a 00:04:30.613 LIB libspdk_event_vhost_blk.a 00:04:30.613 LIB libspdk_event_fsdev.a 00:04:30.613 LIB libspdk_event_vfu_tgt.a 00:04:30.613 LIB libspdk_event_vmd.a 00:04:30.613 SO libspdk_event_keyring.so.1.0 00:04:30.613 LIB libspdk_event_iobuf.a 00:04:30.613 LIB libspdk_event_sock.a 00:04:30.613 SO libspdk_event_vhost_blk.so.3.0 00:04:30.613 SO libspdk_event_vfu_tgt.so.3.0 00:04:30.613 SO 
libspdk_event_scheduler.so.4.0 00:04:30.613 SO libspdk_event_fsdev.so.1.0 00:04:30.875 SO libspdk_event_vmd.so.6.0 00:04:30.875 SO libspdk_event_sock.so.5.0 00:04:30.875 SO libspdk_event_iobuf.so.3.0 00:04:30.875 SYMLINK libspdk_event_keyring.so 00:04:30.875 SYMLINK libspdk_event_vhost_blk.so 00:04:30.875 SYMLINK libspdk_event_vfu_tgt.so 00:04:30.875 SYMLINK libspdk_event_scheduler.so 00:04:30.875 SYMLINK libspdk_event_fsdev.so 00:04:30.875 SYMLINK libspdk_event_vmd.so 00:04:30.875 SYMLINK libspdk_event_sock.so 00:04:30.875 SYMLINK libspdk_event_iobuf.so 00:04:31.138 CC module/event/subsystems/accel/accel.o 00:04:31.138 LIB libspdk_event_accel.a 00:04:31.138 SO libspdk_event_accel.so.6.0 00:04:31.397 SYMLINK libspdk_event_accel.so 00:04:31.657 CC module/event/subsystems/bdev/bdev.o 00:04:31.657 LIB libspdk_event_bdev.a 00:04:31.916 SO libspdk_event_bdev.so.6.0 00:04:31.916 SYMLINK libspdk_event_bdev.so 00:04:32.175 CC module/event/subsystems/scsi/scsi.o 00:04:32.175 CC module/event/subsystems/ublk/ublk.o 00:04:32.175 CC module/event/subsystems/nbd/nbd.o 00:04:32.175 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:32.175 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:32.435 LIB libspdk_event_ublk.a 00:04:32.435 LIB libspdk_event_nbd.a 00:04:32.435 LIB libspdk_event_scsi.a 00:04:32.435 SO libspdk_event_ublk.so.3.0 00:04:32.435 SO libspdk_event_nbd.so.6.0 00:04:32.435 SO libspdk_event_scsi.so.6.0 00:04:32.435 SYMLINK libspdk_event_ublk.so 00:04:32.435 SYMLINK libspdk_event_nbd.so 00:04:32.435 LIB libspdk_event_nvmf.a 00:04:32.435 SYMLINK libspdk_event_scsi.so 00:04:32.435 SO libspdk_event_nvmf.so.6.0 00:04:32.435 SYMLINK libspdk_event_nvmf.so 00:04:32.694 CC module/event/subsystems/iscsi/iscsi.o 00:04:32.695 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:32.695 LIB libspdk_event_iscsi.a 00:04:32.954 SO libspdk_event_iscsi.so.6.0 00:04:32.954 LIB libspdk_event_vhost_scsi.a 00:04:32.954 SO libspdk_event_vhost_scsi.so.3.0 00:04:32.954 SYMLINK libspdk_event_iscsi.so 00:04:32.954 SYMLINK libspdk_event_vhost_scsi.so 00:04:33.213 SO libspdk.so.6.0 00:04:33.213 SYMLINK libspdk.so 00:04:33.481 CC test/rpc_client/rpc_client_test.o 00:04:33.481 CXX app/trace/trace.o 00:04:33.481 TEST_HEADER include/spdk/accel.h 00:04:33.481 TEST_HEADER include/spdk/assert.h 00:04:33.481 TEST_HEADER include/spdk/accel_module.h 00:04:33.481 CC app/spdk_lspci/spdk_lspci.o 00:04:33.481 TEST_HEADER include/spdk/base64.h 00:04:33.481 TEST_HEADER include/spdk/barrier.h 00:04:33.481 CC app/spdk_nvme_identify/identify.o 00:04:33.481 CC app/spdk_top/spdk_top.o 00:04:33.481 CC app/trace_record/trace_record.o 00:04:33.481 TEST_HEADER include/spdk/bdev.h 00:04:33.481 TEST_HEADER include/spdk/bdev_module.h 00:04:33.481 TEST_HEADER include/spdk/bdev_zone.h 00:04:33.481 TEST_HEADER include/spdk/bit_array.h 00:04:33.481 TEST_HEADER include/spdk/blob_bdev.h 00:04:33.481 TEST_HEADER include/spdk/bit_pool.h 00:04:33.481 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:33.481 CC app/spdk_nvme_perf/perf.o 00:04:33.481 TEST_HEADER include/spdk/blob.h 00:04:33.481 TEST_HEADER include/spdk/blobfs.h 00:04:33.481 TEST_HEADER include/spdk/cpuset.h 00:04:33.481 TEST_HEADER include/spdk/conf.h 00:04:33.481 TEST_HEADER include/spdk/crc32.h 00:04:33.481 TEST_HEADER include/spdk/config.h 00:04:33.481 TEST_HEADER include/spdk/crc16.h 00:04:33.481 TEST_HEADER include/spdk/crc64.h 00:04:33.481 TEST_HEADER include/spdk/dif.h 00:04:33.481 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:33.481 TEST_HEADER include/spdk/env_dpdk.h 00:04:33.481 CC 
app/spdk_nvme_discover/discovery_aer.o 00:04:33.481 TEST_HEADER include/spdk/endian.h 00:04:33.481 TEST_HEADER include/spdk/env.h 00:04:33.481 TEST_HEADER include/spdk/dma.h 00:04:33.481 TEST_HEADER include/spdk/event.h 00:04:33.481 TEST_HEADER include/spdk/fd.h 00:04:33.481 TEST_HEADER include/spdk/file.h 00:04:33.481 TEST_HEADER include/spdk/fd_group.h 00:04:33.481 TEST_HEADER include/spdk/fsdev.h 00:04:33.481 TEST_HEADER include/spdk/fsdev_module.h 00:04:33.481 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:33.481 TEST_HEADER include/spdk/ftl.h 00:04:33.481 TEST_HEADER include/spdk/gpt_spec.h 00:04:33.481 TEST_HEADER include/spdk/hexlify.h 00:04:33.481 TEST_HEADER include/spdk/idxd.h 00:04:33.481 TEST_HEADER include/spdk/histogram_data.h 00:04:33.481 TEST_HEADER include/spdk/idxd_spec.h 00:04:33.481 TEST_HEADER include/spdk/init.h 00:04:33.481 TEST_HEADER include/spdk/ioat.h 00:04:33.481 TEST_HEADER include/spdk/ioat_spec.h 00:04:33.481 TEST_HEADER include/spdk/iscsi_spec.h 00:04:33.481 TEST_HEADER include/spdk/json.h 00:04:33.481 TEST_HEADER include/spdk/keyring.h 00:04:33.481 TEST_HEADER include/spdk/jsonrpc.h 00:04:33.481 TEST_HEADER include/spdk/keyring_module.h 00:04:33.481 TEST_HEADER include/spdk/likely.h 00:04:33.481 TEST_HEADER include/spdk/lvol.h 00:04:33.481 TEST_HEADER include/spdk/memory.h 00:04:33.481 TEST_HEADER include/spdk/mmio.h 00:04:33.481 TEST_HEADER include/spdk/md5.h 00:04:33.481 TEST_HEADER include/spdk/log.h 00:04:33.481 TEST_HEADER include/spdk/nbd.h 00:04:33.481 TEST_HEADER include/spdk/net.h 00:04:33.481 TEST_HEADER include/spdk/notify.h 00:04:33.481 TEST_HEADER include/spdk/nvme.h 00:04:33.481 TEST_HEADER include/spdk/nvme_intel.h 00:04:33.481 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:33.481 CC app/spdk_dd/spdk_dd.o 00:04:33.481 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:33.481 TEST_HEADER include/spdk/nvme_spec.h 00:04:33.481 TEST_HEADER include/spdk/nvme_zns.h 00:04:33.481 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:33.481 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:33.481 TEST_HEADER include/spdk/nvmf_transport.h 00:04:33.481 TEST_HEADER include/spdk/nvmf_spec.h 00:04:33.481 TEST_HEADER include/spdk/nvmf.h 00:04:33.481 TEST_HEADER include/spdk/opal.h 00:04:33.481 TEST_HEADER include/spdk/pipe.h 00:04:33.481 TEST_HEADER include/spdk/pci_ids.h 00:04:33.481 TEST_HEADER include/spdk/opal_spec.h 00:04:33.481 TEST_HEADER include/spdk/queue.h 00:04:33.481 TEST_HEADER include/spdk/rpc.h 00:04:33.481 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.481 TEST_HEADER include/spdk/scheduler.h 00:04:33.481 TEST_HEADER include/spdk/reduce.h 00:04:33.481 TEST_HEADER include/spdk/scsi.h 00:04:33.481 TEST_HEADER include/spdk/scsi_spec.h 00:04:33.481 TEST_HEADER include/spdk/sock.h 00:04:33.481 CC app/spdk_tgt/spdk_tgt.o 00:04:33.481 TEST_HEADER include/spdk/stdinc.h 00:04:33.481 TEST_HEADER include/spdk/string.h 00:04:33.481 TEST_HEADER include/spdk/thread.h 00:04:33.481 TEST_HEADER include/spdk/trace_parser.h 00:04:33.481 TEST_HEADER include/spdk/tree.h 00:04:33.481 TEST_HEADER include/spdk/trace.h 00:04:33.481 TEST_HEADER include/spdk/ublk.h 00:04:33.481 TEST_HEADER include/spdk/uuid.h 00:04:33.481 TEST_HEADER include/spdk/util.h 00:04:33.481 TEST_HEADER include/spdk/version.h 00:04:33.481 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:33.481 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:33.481 CC app/nvmf_tgt/nvmf_main.o 00:04:33.481 TEST_HEADER include/spdk/vmd.h 00:04:33.481 TEST_HEADER include/spdk/vhost.h 00:04:33.481 TEST_HEADER include/spdk/zipf.h 00:04:33.481 
CXX test/cpp_headers/accel.o 00:04:33.481 TEST_HEADER include/spdk/xor.h 00:04:33.481 CXX test/cpp_headers/barrier.o 00:04:33.481 CXX test/cpp_headers/assert.o 00:04:33.481 CXX test/cpp_headers/accel_module.o 00:04:33.481 CXX test/cpp_headers/base64.o 00:04:33.481 CXX test/cpp_headers/bdev.o 00:04:33.481 CXX test/cpp_headers/bdev_module.o 00:04:33.481 CXX test/cpp_headers/bit_array.o 00:04:33.481 CXX test/cpp_headers/bit_pool.o 00:04:33.481 CXX test/cpp_headers/bdev_zone.o 00:04:33.481 CXX test/cpp_headers/blob_bdev.o 00:04:33.482 CXX test/cpp_headers/blobfs_bdev.o 00:04:33.482 CXX test/cpp_headers/blobfs.o 00:04:33.482 CXX test/cpp_headers/blob.o 00:04:33.482 CXX test/cpp_headers/conf.o 00:04:33.482 CXX test/cpp_headers/config.o 00:04:33.482 CXX test/cpp_headers/cpuset.o 00:04:33.482 CXX test/cpp_headers/crc64.o 00:04:33.482 CXX test/cpp_headers/crc32.o 00:04:33.482 CXX test/cpp_headers/crc16.o 00:04:33.482 CXX test/cpp_headers/dma.o 00:04:33.482 CXX test/cpp_headers/dif.o 00:04:33.482 CXX test/cpp_headers/env_dpdk.o 00:04:33.482 CXX test/cpp_headers/env.o 00:04:33.482 CXX test/cpp_headers/endian.o 00:04:33.482 CXX test/cpp_headers/fd_group.o 00:04:33.482 CXX test/cpp_headers/event.o 00:04:33.482 CXX test/cpp_headers/file.o 00:04:33.482 CXX test/cpp_headers/fd.o 00:04:33.482 CXX test/cpp_headers/fsdev_module.o 00:04:33.482 CXX test/cpp_headers/fsdev.o 00:04:33.482 CXX test/cpp_headers/ftl.o 00:04:33.482 CXX test/cpp_headers/fuse_dispatcher.o 00:04:33.482 CXX test/cpp_headers/gpt_spec.o 00:04:33.482 CXX test/cpp_headers/histogram_data.o 00:04:33.482 CXX test/cpp_headers/hexlify.o 00:04:33.482 CXX test/cpp_headers/idxd.o 00:04:33.482 CXX test/cpp_headers/idxd_spec.o 00:04:33.482 CXX test/cpp_headers/init.o 00:04:33.482 CXX test/cpp_headers/ioat.o 00:04:33.482 CXX test/cpp_headers/ioat_spec.o 00:04:33.482 CXX test/cpp_headers/json.o 00:04:33.482 CXX test/cpp_headers/iscsi_spec.o 00:04:33.482 CXX test/cpp_headers/jsonrpc.o 00:04:33.482 CXX test/cpp_headers/keyring.o 00:04:33.482 CXX test/cpp_headers/keyring_module.o 00:04:33.482 CXX test/cpp_headers/likely.o 00:04:33.482 CXX test/cpp_headers/log.o 00:04:33.482 CXX test/cpp_headers/md5.o 00:04:33.482 CXX test/cpp_headers/lvol.o 00:04:33.482 CXX test/cpp_headers/mmio.o 00:04:33.482 CXX test/cpp_headers/nbd.o 00:04:33.482 CXX test/cpp_headers/memory.o 00:04:33.482 CXX test/cpp_headers/net.o 00:04:33.482 CXX test/cpp_headers/notify.o 00:04:33.482 CXX test/cpp_headers/nvme.o 00:04:33.482 CXX test/cpp_headers/nvme_intel.o 00:04:33.482 CXX test/cpp_headers/nvme_ocssd.o 00:04:33.482 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:33.482 CXX test/cpp_headers/nvme_zns.o 00:04:33.482 CXX test/cpp_headers/nvme_spec.o 00:04:33.482 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:33.482 CXX test/cpp_headers/nvmf.o 00:04:33.482 CXX test/cpp_headers/nvmf_cmd.o 00:04:33.482 CXX test/cpp_headers/nvmf_spec.o 00:04:33.482 CXX test/cpp_headers/nvmf_transport.o 00:04:33.482 CC examples/util/zipf/zipf.o 00:04:33.482 CXX test/cpp_headers/opal.o 00:04:33.482 CC examples/ioat/verify/verify.o 00:04:33.751 CC examples/ioat/perf/perf.o 00:04:33.751 CC test/app/histogram_perf/histogram_perf.o 00:04:33.751 CC test/env/vtophys/vtophys.o 00:04:33.751 CC test/app/jsoncat/jsoncat.o 00:04:33.751 CC test/env/pci/pci_ut.o 00:04:33.751 CC test/env/memory/memory_ut.o 00:04:33.751 CC test/thread/poller_perf/poller_perf.o 00:04:33.751 CC test/app/stub/stub.o 00:04:33.751 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:33.751 CC app/fio/nvme/fio_plugin.o 00:04:33.751 CC 
test/app/bdev_svc/bdev_svc.o 00:04:33.751 CC app/fio/bdev/fio_plugin.o 00:04:33.751 LINK spdk_lspci 00:04:33.751 CC test/dma/test_dma/test_dma.o 00:04:34.015 LINK spdk_nvme_discover 00:04:34.015 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:34.015 LINK iscsi_tgt 00:04:34.015 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:34.015 LINK rpc_client_test 00:04:34.015 CC test/env/mem_callbacks/mem_callbacks.o 00:04:34.015 LINK interrupt_tgt 00:04:34.015 LINK histogram_perf 00:04:34.015 LINK poller_perf 00:04:34.015 LINK spdk_tgt 00:04:34.015 LINK zipf 00:04:34.015 CXX test/cpp_headers/opal_spec.o 00:04:34.275 LINK jsoncat 00:04:34.275 CXX test/cpp_headers/pci_ids.o 00:04:34.275 CXX test/cpp_headers/pipe.o 00:04:34.275 CXX test/cpp_headers/queue.o 00:04:34.275 CXX test/cpp_headers/rpc.o 00:04:34.275 CXX test/cpp_headers/scheduler.o 00:04:34.275 CXX test/cpp_headers/reduce.o 00:04:34.275 CXX test/cpp_headers/scsi.o 00:04:34.275 CXX test/cpp_headers/scsi_spec.o 00:04:34.275 CXX test/cpp_headers/sock.o 00:04:34.275 CXX test/cpp_headers/stdinc.o 00:04:34.275 CXX test/cpp_headers/string.o 00:04:34.275 CXX test/cpp_headers/thread.o 00:04:34.275 CXX test/cpp_headers/trace.o 00:04:34.275 CXX test/cpp_headers/trace_parser.o 00:04:34.275 CXX test/cpp_headers/util.o 00:04:34.275 LINK stub 00:04:34.275 CXX test/cpp_headers/tree.o 00:04:34.275 CXX test/cpp_headers/uuid.o 00:04:34.275 CXX test/cpp_headers/ublk.o 00:04:34.275 CXX test/cpp_headers/version.o 00:04:34.275 CXX test/cpp_headers/vfio_user_pci.o 00:04:34.275 LINK nvmf_tgt 00:04:34.275 CXX test/cpp_headers/vfio_user_spec.o 00:04:34.275 CXX test/cpp_headers/vmd.o 00:04:34.275 CXX test/cpp_headers/xor.o 00:04:34.275 CXX test/cpp_headers/vhost.o 00:04:34.275 CXX test/cpp_headers/zipf.o 00:04:34.275 LINK spdk_trace_record 00:04:34.275 LINK vtophys 00:04:34.275 LINK env_dpdk_post_init 00:04:34.275 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:34.275 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:34.275 LINK ioat_perf 00:04:34.275 LINK spdk_trace 00:04:34.275 LINK bdev_svc 00:04:34.275 LINK verify 00:04:34.533 LINK pci_ut 00:04:34.533 LINK spdk_dd 00:04:34.533 LINK spdk_nvme 00:04:34.533 LINK nvme_fuzz 00:04:34.533 CC test/event/reactor/reactor.o 00:04:34.533 LINK test_dma 00:04:34.533 CC examples/vmd/lsvmd/lsvmd.o 00:04:34.533 CC test/event/reactor_perf/reactor_perf.o 00:04:34.533 CC test/event/app_repeat/app_repeat.o 00:04:34.533 CC examples/vmd/led/led.o 00:04:34.533 CC examples/idxd/perf/perf.o 00:04:34.533 CC test/event/event_perf/event_perf.o 00:04:34.533 CC examples/sock/hello_world/hello_sock.o 00:04:34.792 CC test/event/scheduler/scheduler.o 00:04:34.792 CC examples/thread/thread/thread_ex.o 00:04:34.792 LINK spdk_bdev 00:04:34.792 LINK spdk_nvme_perf 00:04:34.792 CC app/vhost/vhost.o 00:04:34.792 LINK spdk_nvme_identify 00:04:34.792 LINK lsvmd 00:04:34.792 LINK reactor 00:04:34.792 LINK vhost_fuzz 00:04:34.792 LINK reactor_perf 00:04:34.792 LINK led 00:04:34.792 LINK event_perf 00:04:34.792 LINK app_repeat 00:04:34.792 LINK mem_callbacks 00:04:34.792 LINK spdk_top 00:04:34.792 LINK hello_sock 00:04:35.051 LINK idxd_perf 00:04:35.051 LINK scheduler 00:04:35.051 LINK vhost 00:04:35.051 LINK thread 00:04:35.051 CC test/nvme/connect_stress/connect_stress.o 00:04:35.051 CC test/nvme/aer/aer.o 00:04:35.051 CC test/nvme/startup/startup.o 00:04:35.051 CC test/nvme/reset/reset.o 00:04:35.051 CC test/nvme/fdp/fdp.o 00:04:35.051 CC test/nvme/fused_ordering/fused_ordering.o 00:04:35.051 CC test/nvme/reserve/reserve.o 00:04:35.051 CC test/nvme/sgl/sgl.o 
00:04:35.051 CC test/nvme/boot_partition/boot_partition.o 00:04:35.051 CC test/nvme/cuse/cuse.o 00:04:35.051 CC test/nvme/e2edp/nvme_dp.o 00:04:35.051 CC test/nvme/overhead/overhead.o 00:04:35.051 CC test/nvme/simple_copy/simple_copy.o 00:04:35.051 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:35.051 CC test/nvme/err_injection/err_injection.o 00:04:35.051 CC test/nvme/compliance/nvme_compliance.o 00:04:35.051 CC test/blobfs/mkfs/mkfs.o 00:04:35.051 CC test/accel/dif/dif.o 00:04:35.051 LINK memory_ut 00:04:35.310 CC test/lvol/esnap/esnap.o 00:04:35.310 LINK connect_stress 00:04:35.310 LINK boot_partition 00:04:35.310 LINK startup 00:04:35.310 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:35.310 LINK fused_ordering 00:04:35.310 LINK doorbell_aers 00:04:35.310 CC examples/nvme/hello_world/hello_world.o 00:04:35.310 CC examples/nvme/hotplug/hotplug.o 00:04:35.310 CC examples/nvme/arbitration/arbitration.o 00:04:35.310 LINK err_injection 00:04:35.310 LINK reserve 00:04:35.310 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:35.310 CC examples/nvme/reconnect/reconnect.o 00:04:35.310 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:35.310 CC examples/nvme/abort/abort.o 00:04:35.310 LINK simple_copy 00:04:35.310 LINK mkfs 00:04:35.310 LINK nvme_dp 00:04:35.310 LINK reset 00:04:35.310 LINK sgl 00:04:35.310 LINK aer 00:04:35.310 LINK overhead 00:04:35.310 CC examples/accel/perf/accel_perf.o 00:04:35.310 LINK fdp 00:04:35.310 LINK nvme_compliance 00:04:35.310 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:35.569 CC examples/blob/cli/blobcli.o 00:04:35.569 CC examples/blob/hello_world/hello_blob.o 00:04:35.569 LINK cmb_copy 00:04:35.569 LINK hello_world 00:04:35.569 LINK pmr_persistence 00:04:35.569 LINK hotplug 00:04:35.569 LINK iscsi_fuzz 00:04:35.569 LINK arbitration 00:04:35.569 LINK abort 00:04:35.569 LINK reconnect 00:04:35.569 LINK hello_blob 00:04:35.829 LINK dif 00:04:35.829 LINK hello_fsdev 00:04:35.829 LINK nvme_manage 00:04:35.829 LINK accel_perf 00:04:35.829 LINK blobcli 00:04:36.092 LINK cuse 00:04:36.092 CC test/bdev/bdevio/bdevio.o 00:04:36.351 CC examples/bdev/hello_world/hello_bdev.o 00:04:36.351 CC examples/bdev/bdevperf/bdevperf.o 00:04:36.611 LINK bdevio 00:04:36.611 LINK hello_bdev 00:04:36.869 LINK bdevperf 00:04:37.436 CC examples/nvmf/nvmf/nvmf.o 00:04:37.695 LINK nvmf 00:04:38.631 LINK esnap 00:04:39.199 00:04:39.199 real 0m54.847s 00:04:39.199 user 7m59.568s 00:04:39.199 sys 3m37.482s 00:04:39.199 07:14:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:39.199 07:14:06 make -- common/autotest_common.sh@10 -- $ set +x 00:04:39.199 ************************************ 00:04:39.199 END TEST make 00:04:39.199 ************************************ 00:04:39.199 07:14:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.199 07:14:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:39.200 07:14:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:39.200 07:14:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.200 07:14:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.200 07:14:07 -- pm/common@44 -- $ pid=465449 00:04:39.200 07:14:07 -- pm/common@50 -- $ kill -TERM 465449 00:04:39.200 07:14:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.200 07:14:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.200 07:14:07 -- pm/common@44 
-- $ pid=465451 00:04:39.200 07:14:07 -- pm/common@50 -- $ kill -TERM 465451 00:04:39.200 07:14:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.200 07:14:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:39.200 07:14:07 -- pm/common@44 -- $ pid=465453 00:04:39.200 07:14:07 -- pm/common@50 -- $ kill -TERM 465453 00:04:39.200 07:14:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.200 07:14:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:39.200 07:14:07 -- pm/common@44 -- $ pid=465479 00:04:39.200 07:14:07 -- pm/common@50 -- $ sudo -E kill -TERM 465479 00:04:39.200 07:14:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:39.200 07:14:07 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:39.200 07:14:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.200 07:14:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.200 07:14:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.200 07:14:07 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.200 07:14:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.200 07:14:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.200 07:14:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.200 07:14:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.200 07:14:07 -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.200 07:14:07 -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.200 07:14:07 -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.200 07:14:07 -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.200 07:14:07 -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.200 07:14:07 -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.200 07:14:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.200 07:14:07 -- scripts/common.sh@344 -- # case "$op" in 00:04:39.200 07:14:07 -- scripts/common.sh@345 -- # : 1 00:04:39.200 07:14:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.200 07:14:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.200 07:14:07 -- scripts/common.sh@365 -- # decimal 1 00:04:39.200 07:14:07 -- scripts/common.sh@353 -- # local d=1 00:04:39.200 07:14:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.200 07:14:07 -- scripts/common.sh@355 -- # echo 1 00:04:39.200 07:14:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.200 07:14:07 -- scripts/common.sh@366 -- # decimal 2 00:04:39.200 07:14:07 -- scripts/common.sh@353 -- # local d=2 00:04:39.200 07:14:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.200 07:14:07 -- scripts/common.sh@355 -- # echo 2 00:04:39.200 07:14:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.200 07:14:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.200 07:14:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.200 07:14:07 -- scripts/common.sh@368 -- # return 0 00:04:39.200 07:14:07 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.200 07:14:07 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.200 --rc genhtml_branch_coverage=1 00:04:39.200 --rc genhtml_function_coverage=1 00:04:39.200 --rc genhtml_legend=1 00:04:39.200 --rc geninfo_all_blocks=1 00:04:39.200 --rc geninfo_unexecuted_blocks=1 00:04:39.200 00:04:39.200 ' 00:04:39.200 07:14:07 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.200 --rc genhtml_branch_coverage=1 00:04:39.200 --rc genhtml_function_coverage=1 00:04:39.200 --rc genhtml_legend=1 00:04:39.200 --rc geninfo_all_blocks=1 00:04:39.200 --rc geninfo_unexecuted_blocks=1 00:04:39.200 00:04:39.200 ' 00:04:39.200 07:14:07 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.200 --rc genhtml_branch_coverage=1 00:04:39.200 --rc genhtml_function_coverage=1 00:04:39.200 --rc genhtml_legend=1 00:04:39.200 --rc geninfo_all_blocks=1 00:04:39.200 --rc geninfo_unexecuted_blocks=1 00:04:39.200 00:04:39.200 ' 00:04:39.200 07:14:07 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.200 --rc genhtml_branch_coverage=1 00:04:39.200 --rc genhtml_function_coverage=1 00:04:39.200 --rc genhtml_legend=1 00:04:39.200 --rc geninfo_all_blocks=1 00:04:39.200 --rc geninfo_unexecuted_blocks=1 00:04:39.200 00:04:39.200 ' 00:04:39.200 07:14:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.200 07:14:07 -- nvmf/common.sh@7 -- # uname -s 00:04:39.200 07:14:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.200 07:14:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.200 07:14:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.200 07:14:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.200 07:14:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.200 07:14:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.200 07:14:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.200 07:14:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.200 07:14:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.200 07:14:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.200 07:14:07 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:39.200 07:14:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:39.200 07:14:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.200 07:14:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.200 07:14:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:39.200 07:14:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.200 07:14:07 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.200 07:14:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.200 07:14:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.200 07:14:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.200 07:14:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.200 07:14:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.200 07:14:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.200 07:14:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.200 07:14:07 -- paths/export.sh@5 -- # export PATH 00:04:39.200 07:14:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.200 07:14:07 -- nvmf/common.sh@51 -- # : 0 00:04:39.200 07:14:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.200 07:14:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.200 07:14:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.200 07:14:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.200 07:14:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.200 07:14:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.200 07:14:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.200 07:14:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.200 07:14:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.200 07:14:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:39.200 07:14:07 -- spdk/autotest.sh@32 -- # uname -s 00:04:39.200 07:14:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:39.200 07:14:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:39.200 07:14:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
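The lt/cmp_versions trace just above comes from scripts/common.sh and is how the harness decides between lcov 1.x and lcov 2.x option names: both version strings are split on '.', '-' and ':' and compared component by component. A trimmed-down sketch of that idea, for reference only; the real helpers also handle '>', '=' and non-numeric components (via the decimal helper seen in the trace), so treat this as an illustration rather than the shipped code:

    version_lt() {                      # returns 0 when $1 < $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0       # first smaller component decides
            ((x > y)) && return 1
        done
        return 1                        # equal versions are not "less than"
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')   # "1.15" in this run
    if version_lt "$lcov_ver" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

With lcov 1.15 installed the comparison succeeds, which is why the 1.x style --rc lcov_* options are exported in the trace above.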
00:04:39.200 07:14:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:39.200 07:14:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:39.200 07:14:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:39.200 07:14:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:39.200 07:14:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:39.200 07:14:07 -- spdk/autotest.sh@48 -- # udevadm_pid=527689 00:04:39.200 07:14:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:39.200 07:14:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:39.200 07:14:07 -- pm/common@17 -- # local monitor 00:04:39.200 07:14:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.200 07:14:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.200 07:14:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.200 07:14:07 -- pm/common@21 -- # date +%s 00:04:39.200 07:14:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.200 07:14:07 -- pm/common@21 -- # date +%s 00:04:39.201 07:14:07 -- pm/common@25 -- # sleep 1 00:04:39.201 07:14:07 -- pm/common@21 -- # date +%s 00:04:39.201 07:14:07 -- pm/common@21 -- # date +%s 00:04:39.201 07:14:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601647 00:04:39.201 07:14:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601647 00:04:39.201 07:14:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601647 00:04:39.201 07:14:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601647 00:04:39.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601647_collect-vmstat.pm.log 00:04:39.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601647_collect-cpu-load.pm.log 00:04:39.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601647_collect-cpu-temp.pm.log 00:04:39.459 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601647_collect-bmc-pm.bmc.pm.log 00:04:40.396 07:14:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:40.396 07:14:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:40.396 07:14:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.396 07:14:08 -- common/autotest_common.sh@10 -- # set +x 00:04:40.396 07:14:08 -- spdk/autotest.sh@59 -- # create_test_list 00:04:40.396 07:14:08 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:40.396 07:14:08 -- common/autotest_common.sh@10 -- # set +x 00:04:40.396 07:14:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:40.396 07:14:08 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:40.396 07:14:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:40.396 07:14:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:40.396 07:14:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:40.396 07:14:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:40.396 07:14:08 -- common/autotest_common.sh@1457 -- # uname 00:04:40.396 07:14:08 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:40.396 07:14:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:40.396 07:14:08 -- common/autotest_common.sh@1477 -- # uname 00:04:40.396 07:14:08 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:40.396 07:14:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:40.396 07:14:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:40.396 lcov: LCOV version 1.15 00:04:40.396 07:14:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:52.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:52.617 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:07.501 07:14:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:07.501 07:14:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.501 07:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:07.501 07:14:33 -- spdk/autotest.sh@78 -- # rm -f 00:05:07.501 07:14:33 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.068 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:05:08.068 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:08.068 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:08.068 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:08.068 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:08.068 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:08.068 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:08.068 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:08.068 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:08.326 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:08.326 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:08.326 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:08.326 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:08.326 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:08.326 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:08.326 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:08.326 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:08.326 07:14:36 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:08.326 07:14:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:08.326 07:14:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:08.326 07:14:36 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:08.326 07:14:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:08.326 07:14:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:08.326 07:14:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:08.326 07:14:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:08.326 07:14:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:08.326 07:14:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:08.326 07:14:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:08.326 07:14:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:08.326 07:14:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:08.326 07:14:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:08.326 07:14:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:08.586 No valid GPT data, bailing 00:05:08.586 07:14:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:08.586 07:14:36 -- scripts/common.sh@394 -- # pt= 00:05:08.586 07:14:36 -- scripts/common.sh@395 -- # return 1 00:05:08.586 07:14:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:08.586 1+0 records in 00:05:08.586 1+0 records out 00:05:08.586 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583969 s, 180 MB/s 00:05:08.586 07:14:36 -- spdk/autotest.sh@105 -- # sync 00:05:08.586 07:14:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:08.586 07:14:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:08.586 07:14:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:13.851 07:14:41 -- spdk/autotest.sh@111 -- # uname -s 00:05:13.851 07:14:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:13.851 07:14:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:13.851 07:14:41 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:17.137 Hugepages 00:05:17.137 node hugesize free / total 00:05:17.137 node0 1048576kB 0 / 0 00:05:17.137 node0 2048kB 1024 / 1024 00:05:17.137 node1 1048576kB 0 / 0 00:05:17.137 node1 2048kB 1024 / 1024 00:05:17.137 00:05:17.137 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:17.137 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:17.137 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:17.137 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:17.137 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:17.137 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:17.137 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:17.137 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:17.137 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:17.137 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:17.137 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:17.137 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:17.137 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:17.137 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:17.137 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:17.137 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:17.137 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:17.137 I/OAT 
0000:80:04.7 8086 2021 1 ioatdma - - 00:05:17.137 07:14:44 -- spdk/autotest.sh@117 -- # uname -s 00:05:17.137 07:14:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:17.137 07:14:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:17.137 07:14:44 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:19.671 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:19.671 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:19.672 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:20.609 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:20.609 07:14:48 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:21.985 07:14:49 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:21.985 07:14:49 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:21.985 07:14:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:21.985 07:14:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:21.985 07:14:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:21.985 07:14:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:21.985 07:14:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:21.985 07:14:49 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:21.985 07:14:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:21.985 07:14:49 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:21.985 07:14:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:21.985 07:14:49 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:24.522 Waiting for block devices as requested 00:05:24.522 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:24.522 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:24.522 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:24.522 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:24.522 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:24.781 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:24.781 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:24.781 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:25.042 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:25.042 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:25.042 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:25.042 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:25.302 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:25.302 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:25.302 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:25.302 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:25.562 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:25.562 07:14:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:25.562 07:14:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:25.562 07:14:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:25.562 07:14:53 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:05:25.562 07:14:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:25.562 07:14:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:25.562 07:14:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:25.562 07:14:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:25.562 07:14:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:25.562 07:14:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:25.562 07:14:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:25.562 07:14:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:25.562 07:14:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:25.562 07:14:53 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:05:25.562 07:14:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:25.562 07:14:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:25.562 07:14:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:25.562 07:14:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:25.562 07:14:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:25.562 07:14:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:25.562 07:14:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:25.562 07:14:53 -- common/autotest_common.sh@1543 -- # continue 00:05:25.562 07:14:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.562 07:14:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.562 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.562 07:14:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.562 07:14:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.562 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.562 07:14:53 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:28.854 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:28.854 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:29.422 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:29.422 07:14:57 -- 
spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:29.422 07:14:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.422 07:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:29.422 07:14:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:29.422 07:14:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:29.422 07:14:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.422 07:14:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:29.422 07:14:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:29.422 07:14:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:29.422 07:14:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:29.422 07:14:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:29.422 07:14:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:29.422 07:14:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:29.422 07:14:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.422 07:14:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:29.422 07:14:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:29.422 07:14:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:29.422 07:14:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:29.422 07:14:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:29.422 07:14:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:29.422 07:14:57 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:29.422 07:14:57 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:29.422 07:14:57 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:29.422 07:14:57 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:29.422 07:14:57 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:05:29.422 07:14:57 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:05:29.422 07:14:57 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=542118 00:05:29.422 07:14:57 -- common/autotest_common.sh@1585 -- # waitforlisten 542118 00:05:29.422 07:14:57 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.422 07:14:57 -- common/autotest_common.sh@835 -- # '[' -z 542118 ']' 00:05:29.422 07:14:57 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.422 07:14:57 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.422 07:14:57 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.422 07:14:57 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.422 07:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:29.681 [2024-11-26 07:14:57.553188] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
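Before reverting any OPAL state, opal_revert_cleanup (traced above) narrows the NVMe list down to controllers whose PCI device ID matches 0x0a54, the data-center NVMe SSD at 0000:5e:00.0 on this node, and only then launches spdk_tgt. The gist of that get_nvme_bdfs_by_id step, reduced to a sketch; the paths, the jq filter and the device ID are taken from the trace, the surrounding scaffolding is illustrative:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # as used throughout this run
    want=0x0a54                                                 # PCI device ID to match

    # gen_nvme.sh prints a JSON config; .config[].params.traddr are the NVMe PCI addresses
    mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    bdfs=()
    for bdf in "${all_bdfs[@]}"; do
        # sysfs exposes each function's PCI device ID, e.g. "0x0a54"
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"                                  # 0000:5e:00.0 in this run

The single match is then handed to rpc.py bdev_nvme_attach_controller, which is exactly what the next block of the log shows.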
00:05:29.681 [2024-11-26 07:14:57.553235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542118 ] 00:05:29.681 [2024-11-26 07:14:57.615877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.681 [2024-11-26 07:14:57.656176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.940 07:14:57 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.940 07:14:57 -- common/autotest_common.sh@868 -- # return 0 00:05:29.940 07:14:57 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:29.940 07:14:57 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:29.940 07:14:57 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:33.227 nvme0n1 00:05:33.227 07:15:00 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:33.227 [2024-11-26 07:15:01.029733] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:33.227 request: 00:05:33.227 { 00:05:33.227 "nvme_ctrlr_name": "nvme0", 00:05:33.227 "password": "test", 00:05:33.227 "method": "bdev_nvme_opal_revert", 00:05:33.227 "req_id": 1 00:05:33.227 } 00:05:33.227 Got JSON-RPC error response 00:05:33.227 response: 00:05:33.227 { 00:05:33.227 "code": -32602, 00:05:33.227 "message": "Invalid parameters" 00:05:33.227 } 00:05:33.227 07:15:01 -- common/autotest_common.sh@1591 -- # true 00:05:33.227 07:15:01 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:33.227 07:15:01 -- common/autotest_common.sh@1595 -- # killprocess 542118 00:05:33.227 07:15:01 -- common/autotest_common.sh@954 -- # '[' -z 542118 ']' 00:05:33.227 07:15:01 -- common/autotest_common.sh@958 -- # kill -0 542118 00:05:33.227 07:15:01 -- common/autotest_common.sh@959 -- # uname 00:05:33.227 07:15:01 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.227 07:15:01 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542118 00:05:33.227 07:15:01 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.227 07:15:01 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.227 07:15:01 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542118' 00:05:33.227 killing process with pid 542118 00:05:33.227 07:15:01 -- common/autotest_common.sh@973 -- # kill 542118 00:05:33.227 07:15:01 -- common/autotest_common.sh@978 -- # wait 542118 00:05:35.133 07:15:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:35.133 07:15:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:35.133 07:15:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:35.133 07:15:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:35.133 07:15:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:35.133 07:15:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.133 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:05:35.133 07:15:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:35.133 07:15:02 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.133 07:15:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.133 07:15:02 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:35.133 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:05:35.133 ************************************ 00:05:35.133 START TEST env 00:05:35.133 ************************************ 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:35.133 * Looking for test storage... 00:05:35.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.133 07:15:02 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.133 07:15:02 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.133 07:15:02 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.133 07:15:02 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.133 07:15:02 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.133 07:15:02 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.133 07:15:02 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.133 07:15:02 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.133 07:15:02 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.133 07:15:02 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.133 07:15:02 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.133 07:15:02 env -- scripts/common.sh@344 -- # case "$op" in 00:05:35.133 07:15:02 env -- scripts/common.sh@345 -- # : 1 00:05:35.133 07:15:02 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.133 07:15:02 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.133 07:15:02 env -- scripts/common.sh@365 -- # decimal 1 00:05:35.133 07:15:02 env -- scripts/common.sh@353 -- # local d=1 00:05:35.133 07:15:02 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.133 07:15:02 env -- scripts/common.sh@355 -- # echo 1 00:05:35.133 07:15:02 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.133 07:15:02 env -- scripts/common.sh@366 -- # decimal 2 00:05:35.133 07:15:02 env -- scripts/common.sh@353 -- # local d=2 00:05:35.133 07:15:02 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.133 07:15:02 env -- scripts/common.sh@355 -- # echo 2 00:05:35.133 07:15:02 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.133 07:15:02 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.133 07:15:02 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.133 07:15:02 env -- scripts/common.sh@368 -- # return 0 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.133 --rc genhtml_branch_coverage=1 00:05:35.133 --rc genhtml_function_coverage=1 00:05:35.133 --rc genhtml_legend=1 00:05:35.133 --rc geninfo_all_blocks=1 00:05:35.133 --rc geninfo_unexecuted_blocks=1 00:05:35.133 00:05:35.133 ' 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.133 --rc genhtml_branch_coverage=1 00:05:35.133 --rc genhtml_function_coverage=1 00:05:35.133 --rc genhtml_legend=1 00:05:35.133 --rc geninfo_all_blocks=1 00:05:35.133 --rc geninfo_unexecuted_blocks=1 00:05:35.133 00:05:35.133 ' 00:05:35.133 07:15:02 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.133 --rc genhtml_branch_coverage=1 00:05:35.133 --rc genhtml_function_coverage=1 00:05:35.133 --rc genhtml_legend=1 00:05:35.134 --rc geninfo_all_blocks=1 00:05:35.134 --rc geninfo_unexecuted_blocks=1 00:05:35.134 00:05:35.134 ' 00:05:35.134 07:15:02 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.134 --rc genhtml_branch_coverage=1 00:05:35.134 --rc genhtml_function_coverage=1 00:05:35.134 --rc genhtml_legend=1 00:05:35.134 --rc geninfo_all_blocks=1 00:05:35.134 --rc geninfo_unexecuted_blocks=1 00:05:35.134 00:05:35.134 ' 00:05:35.134 07:15:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.134 07:15:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.134 07:15:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.134 07:15:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.134 ************************************ 00:05:35.134 START TEST env_memory 00:05:35.134 ************************************ 00:05:35.134 07:15:02 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.134 00:05:35.134 00:05:35.134 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.134 http://cunit.sourceforge.net/ 00:05:35.134 00:05:35.134 00:05:35.134 Suite: memory 00:05:35.134 Test: alloc and free memory map ...[2024-11-26 07:15:03.017314] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:35.134 passed 00:05:35.134 Test: mem map translation ...[2024-11-26 07:15:03.036605] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:35.134 [2024-11-26 07:15:03.036619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:35.134 [2024-11-26 07:15:03.036655] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:35.134 [2024-11-26 07:15:03.036661] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:35.134 passed 00:05:35.134 Test: mem map registration ...[2024-11-26 07:15:03.073715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:35.134 [2024-11-26 07:15:03.073728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:35.134 passed 00:05:35.134 Test: mem map adjacent registrations ...passed 00:05:35.134 00:05:35.134 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.134 suites 1 1 n/a 0 0 00:05:35.134 tests 4 4 4 0 0 00:05:35.134 asserts 152 152 152 0 n/a 00:05:35.134 00:05:35.134 Elapsed time = 0.137 seconds 00:05:35.134 00:05:35.134 real 0m0.150s 00:05:35.134 user 0m0.142s 00:05:35.134 sys 0m0.008s 00:05:35.134 07:15:03 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.134 07:15:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:35.134 ************************************ 00:05:35.134 END TEST env_memory 00:05:35.134 ************************************ 00:05:35.134 07:15:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.134 07:15:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.134 07:15:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.134 07:15:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.134 ************************************ 00:05:35.134 START TEST env_vtophys 00:05:35.134 ************************************ 00:05:35.134 07:15:03 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.134 EAL: lib.eal log level changed from notice to debug 00:05:35.134 EAL: Detected lcore 0 as core 0 on socket 0 00:05:35.134 EAL: Detected lcore 1 as core 1 on socket 0 00:05:35.134 EAL: Detected lcore 2 as core 2 on socket 0 00:05:35.134 EAL: Detected lcore 3 as core 3 on socket 0 00:05:35.134 EAL: Detected lcore 4 as core 4 on socket 0 00:05:35.134 EAL: Detected lcore 5 as core 5 on socket 0 00:05:35.134 EAL: Detected lcore 6 as core 6 on socket 0 00:05:35.134 EAL: Detected lcore 7 as core 8 on socket 0 00:05:35.134 EAL: Detected lcore 8 as core 9 on socket 0 00:05:35.134 EAL: Detected lcore 9 as core 10 on socket 0 00:05:35.134 EAL: Detected lcore 10 as 
core 11 on socket 0 00:05:35.134 EAL: Detected lcore 11 as core 12 on socket 0 00:05:35.134 EAL: Detected lcore 12 as core 13 on socket 0 00:05:35.134 EAL: Detected lcore 13 as core 16 on socket 0 00:05:35.134 EAL: Detected lcore 14 as core 17 on socket 0 00:05:35.134 EAL: Detected lcore 15 as core 18 on socket 0 00:05:35.134 EAL: Detected lcore 16 as core 19 on socket 0 00:05:35.134 EAL: Detected lcore 17 as core 20 on socket 0 00:05:35.134 EAL: Detected lcore 18 as core 21 on socket 0 00:05:35.134 EAL: Detected lcore 19 as core 25 on socket 0 00:05:35.134 EAL: Detected lcore 20 as core 26 on socket 0 00:05:35.134 EAL: Detected lcore 21 as core 27 on socket 0 00:05:35.134 EAL: Detected lcore 22 as core 28 on socket 0 00:05:35.134 EAL: Detected lcore 23 as core 29 on socket 0 00:05:35.134 EAL: Detected lcore 24 as core 0 on socket 1 00:05:35.134 EAL: Detected lcore 25 as core 1 on socket 1 00:05:35.134 EAL: Detected lcore 26 as core 2 on socket 1 00:05:35.134 EAL: Detected lcore 27 as core 3 on socket 1 00:05:35.134 EAL: Detected lcore 28 as core 4 on socket 1 00:05:35.134 EAL: Detected lcore 29 as core 5 on socket 1 00:05:35.134 EAL: Detected lcore 30 as core 6 on socket 1 00:05:35.134 EAL: Detected lcore 31 as core 9 on socket 1 00:05:35.134 EAL: Detected lcore 32 as core 10 on socket 1 00:05:35.134 EAL: Detected lcore 33 as core 11 on socket 1 00:05:35.134 EAL: Detected lcore 34 as core 12 on socket 1 00:05:35.134 EAL: Detected lcore 35 as core 13 on socket 1 00:05:35.134 EAL: Detected lcore 36 as core 16 on socket 1 00:05:35.134 EAL: Detected lcore 37 as core 17 on socket 1 00:05:35.134 EAL: Detected lcore 38 as core 18 on socket 1 00:05:35.134 EAL: Detected lcore 39 as core 19 on socket 1 00:05:35.134 EAL: Detected lcore 40 as core 20 on socket 1 00:05:35.134 EAL: Detected lcore 41 as core 21 on socket 1 00:05:35.134 EAL: Detected lcore 42 as core 24 on socket 1 00:05:35.134 EAL: Detected lcore 43 as core 25 on socket 1 00:05:35.134 EAL: Detected lcore 44 as core 26 on socket 1 00:05:35.134 EAL: Detected lcore 45 as core 27 on socket 1 00:05:35.134 EAL: Detected lcore 46 as core 28 on socket 1 00:05:35.134 EAL: Detected lcore 47 as core 29 on socket 1 00:05:35.134 EAL: Detected lcore 48 as core 0 on socket 0 00:05:35.134 EAL: Detected lcore 49 as core 1 on socket 0 00:05:35.134 EAL: Detected lcore 50 as core 2 on socket 0 00:05:35.134 EAL: Detected lcore 51 as core 3 on socket 0 00:05:35.134 EAL: Detected lcore 52 as core 4 on socket 0 00:05:35.134 EAL: Detected lcore 53 as core 5 on socket 0 00:05:35.134 EAL: Detected lcore 54 as core 6 on socket 0 00:05:35.134 EAL: Detected lcore 55 as core 8 on socket 0 00:05:35.134 EAL: Detected lcore 56 as core 9 on socket 0 00:05:35.134 EAL: Detected lcore 57 as core 10 on socket 0 00:05:35.134 EAL: Detected lcore 58 as core 11 on socket 0 00:05:35.134 EAL: Detected lcore 59 as core 12 on socket 0 00:05:35.134 EAL: Detected lcore 60 as core 13 on socket 0 00:05:35.134 EAL: Detected lcore 61 as core 16 on socket 0 00:05:35.134 EAL: Detected lcore 62 as core 17 on socket 0 00:05:35.134 EAL: Detected lcore 63 as core 18 on socket 0 00:05:35.134 EAL: Detected lcore 64 as core 19 on socket 0 00:05:35.134 EAL: Detected lcore 65 as core 20 on socket 0 00:05:35.134 EAL: Detected lcore 66 as core 21 on socket 0 00:05:35.134 EAL: Detected lcore 67 as core 25 on socket 0 00:05:35.135 EAL: Detected lcore 68 as core 26 on socket 0 00:05:35.135 EAL: Detected lcore 69 as core 27 on socket 0 00:05:35.135 EAL: Detected lcore 70 as core 28 on socket 0 
00:05:35.135 EAL: Detected lcore 71 as core 29 on socket 0 00:05:35.135 EAL: Detected lcore 72 as core 0 on socket 1 00:05:35.135 EAL: Detected lcore 73 as core 1 on socket 1 00:05:35.135 EAL: Detected lcore 74 as core 2 on socket 1 00:05:35.135 EAL: Detected lcore 75 as core 3 on socket 1 00:05:35.135 EAL: Detected lcore 76 as core 4 on socket 1 00:05:35.135 EAL: Detected lcore 77 as core 5 on socket 1 00:05:35.135 EAL: Detected lcore 78 as core 6 on socket 1 00:05:35.135 EAL: Detected lcore 79 as core 9 on socket 1 00:05:35.135 EAL: Detected lcore 80 as core 10 on socket 1 00:05:35.135 EAL: Detected lcore 81 as core 11 on socket 1 00:05:35.135 EAL: Detected lcore 82 as core 12 on socket 1 00:05:35.135 EAL: Detected lcore 83 as core 13 on socket 1 00:05:35.135 EAL: Detected lcore 84 as core 16 on socket 1 00:05:35.135 EAL: Detected lcore 85 as core 17 on socket 1 00:05:35.135 EAL: Detected lcore 86 as core 18 on socket 1 00:05:35.135 EAL: Detected lcore 87 as core 19 on socket 1 00:05:35.135 EAL: Detected lcore 88 as core 20 on socket 1 00:05:35.135 EAL: Detected lcore 89 as core 21 on socket 1 00:05:35.135 EAL: Detected lcore 90 as core 24 on socket 1 00:05:35.135 EAL: Detected lcore 91 as core 25 on socket 1 00:05:35.135 EAL: Detected lcore 92 as core 26 on socket 1 00:05:35.135 EAL: Detected lcore 93 as core 27 on socket 1 00:05:35.135 EAL: Detected lcore 94 as core 28 on socket 1 00:05:35.135 EAL: Detected lcore 95 as core 29 on socket 1 00:05:35.135 EAL: Maximum logical cores by configuration: 128 00:05:35.135 EAL: Detected CPU lcores: 96 00:05:35.135 EAL: Detected NUMA nodes: 2 00:05:35.135 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:35.135 EAL: Detected shared linkage of DPDK 00:05:35.135 EAL: No shared files mode enabled, IPC will be disabled 00:05:35.395 EAL: Bus pci wants IOVA as 'DC' 00:05:35.395 EAL: Buses did not request a specific IOVA mode. 00:05:35.395 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:35.395 EAL: Selected IOVA mode 'VA' 00:05:35.395 EAL: Probing VFIO support... 00:05:35.395 EAL: IOMMU type 1 (Type 1) is supported 00:05:35.395 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:35.395 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:35.395 EAL: VFIO support initialized 00:05:35.395 EAL: Ask a virtual area of 0x2e000 bytes 00:05:35.395 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:35.395 EAL: Setting up physically contiguous memory... 
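The memseg lists created in the next few entries are carved out of the 2048kB hugepages reported earlier by setup.sh status (1024 free / 1024 total per NUMA node on this box). Those per-node counts live in plain sysfs, so they can be checked without SPDK at all; a small sketch, with 2048kB being the page size actually in use in this run:

    sz=2048kB                                   # hugepage size used in this run
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-$sz
        printf '%s %s %s / %s\n' "$(basename "$node")" "$sz" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done
    # e.g. node0 2048kB 1024 / 1024
    #      node1 2048kB 1024 / 1024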
00:05:35.395 EAL: Setting maximum number of open files to 524288 00:05:35.395 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:35.395 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:35.395 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:35.395 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.395 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:35.395 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.395 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.395 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:35.395 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:35.395 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.395 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:35.395 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.395 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.395 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:35.395 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:35.395 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.395 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:35.395 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.395 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.395 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:35.395 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:35.395 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.395 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:35.395 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.395 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.395 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:35.395 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:35.395 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:35.395 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.395 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:35.395 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.395 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.395 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:35.395 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:35.395 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.395 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:35.395 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.395 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.395 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:35.395 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:35.395 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.395 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:35.395 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.395 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.395 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:35.395 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:35.395 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.395 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:35.395 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.395 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.395 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:35.395 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:35.395 EAL: Hugepages will be freed exactly as allocated. 00:05:35.395 EAL: No shared files mode enabled, IPC is disabled 00:05:35.395 EAL: No shared files mode enabled, IPC is disabled 00:05:35.395 EAL: TSC frequency is ~2300000 KHz 00:05:35.395 EAL: Main lcore 0 is ready (tid=7f34d3cf3a00;cpuset=[0]) 00:05:35.395 EAL: Trying to obtain current memory policy. 00:05:35.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.395 EAL: Restoring previous memory policy: 0 00:05:35.395 EAL: request: mp_malloc_sync 00:05:35.395 EAL: No shared files mode enabled, IPC is disabled 00:05:35.395 EAL: Heap on socket 0 was expanded by 2MB 00:05:35.395 EAL: No shared files mode enabled, IPC is disabled 00:05:35.395 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:35.395 EAL: Mem event callback 'spdk:(nil)' registered 00:05:35.395 00:05:35.395 00:05:35.395 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.395 http://cunit.sourceforge.net/ 00:05:35.395 00:05:35.395 00:05:35.395 Suite: components_suite 00:05:35.395 Test: vtophys_malloc_test ...passed 00:05:35.395 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.395 EAL: Restoring previous memory policy: 4 00:05:35.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.395 EAL: request: mp_malloc_sync 00:05:35.395 EAL: No shared files mode enabled, IPC is disabled 00:05:35.395 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.395 EAL: request: mp_malloc_sync 00:05:35.395 EAL: No shared files mode enabled, IPC is disabled 00:05:35.395 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.395 EAL: Trying to obtain current memory policy. 00:05:35.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.395 EAL: Restoring previous memory policy: 4 00:05:35.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.395 EAL: request: mp_malloc_sync 00:05:35.395 EAL: No shared files mode enabled, IPC is disabled 00:05:35.395 EAL: Heap on socket 0 was expanded by 6MB 00:05:35.395 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was shrunk by 6MB 00:05:35.396 EAL: Trying to obtain current memory policy. 00:05:35.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.396 EAL: Restoring previous memory policy: 4 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was expanded by 10MB 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was shrunk by 10MB 00:05:35.396 EAL: Trying to obtain current memory policy. 
00:05:35.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.396 EAL: Restoring previous memory policy: 4 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was expanded by 18MB 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was shrunk by 18MB 00:05:35.396 EAL: Trying to obtain current memory policy. 00:05:35.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.396 EAL: Restoring previous memory policy: 4 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was expanded by 34MB 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was shrunk by 34MB 00:05:35.396 EAL: Trying to obtain current memory policy. 00:05:35.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.396 EAL: Restoring previous memory policy: 4 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was expanded by 66MB 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was shrunk by 66MB 00:05:35.396 EAL: Trying to obtain current memory policy. 00:05:35.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.396 EAL: Restoring previous memory policy: 4 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.396 EAL: Trying to obtain current memory policy. 00:05:35.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.396 EAL: Restoring previous memory policy: 4 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.396 EAL: request: mp_malloc_sync 00:05:35.396 EAL: No shared files mode enabled, IPC is disabled 00:05:35.396 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.396 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.655 EAL: request: mp_malloc_sync 00:05:35.655 EAL: No shared files mode enabled, IPC is disabled 00:05:35.655 EAL: Heap on socket 0 was shrunk by 258MB 00:05:35.655 EAL: Trying to obtain current memory policy. 
00:05:35.655 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.655 EAL: Restoring previous memory policy: 4 00:05:35.655 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.655 EAL: request: mp_malloc_sync 00:05:35.655 EAL: No shared files mode enabled, IPC is disabled 00:05:35.655 EAL: Heap on socket 0 was expanded by 514MB 00:05:35.655 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.914 EAL: request: mp_malloc_sync 00:05:35.914 EAL: No shared files mode enabled, IPC is disabled 00:05:35.914 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.914 EAL: Trying to obtain current memory policy. 00:05:35.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.914 EAL: Restoring previous memory policy: 4 00:05:35.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.914 EAL: request: mp_malloc_sync 00:05:35.914 EAL: No shared files mode enabled, IPC is disabled 00:05:35.914 EAL: Heap on socket 0 was expanded by 1026MB 00:05:36.174 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.433 EAL: request: mp_malloc_sync 00:05:36.433 EAL: No shared files mode enabled, IPC is disabled 00:05:36.433 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:36.433 passed 00:05:36.433 00:05:36.433 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.433 suites 1 1 n/a 0 0 00:05:36.433 tests 2 2 2 0 0 00:05:36.433 asserts 497 497 497 0 n/a 00:05:36.433 00:05:36.433 Elapsed time = 0.969 seconds 00:05:36.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.433 EAL: request: mp_malloc_sync 00:05:36.433 EAL: No shared files mode enabled, IPC is disabled 00:05:36.433 EAL: Heap on socket 0 was shrunk by 2MB 00:05:36.433 EAL: No shared files mode enabled, IPC is disabled 00:05:36.433 EAL: No shared files mode enabled, IPC is disabled 00:05:36.433 EAL: No shared files mode enabled, IPC is disabled 00:05:36.433 00:05:36.433 real 0m1.088s 00:05:36.433 user 0m0.646s 00:05:36.433 sys 0m0.416s 00:05:36.433 07:15:04 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.433 07:15:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:36.433 ************************************ 00:05:36.433 END TEST env_vtophys 00:05:36.433 ************************************ 00:05:36.433 07:15:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.433 07:15:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.433 07:15:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.433 07:15:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.433 ************************************ 00:05:36.433 START TEST env_pci 00:05:36.433 ************************************ 00:05:36.433 07:15:04 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:36.433 00:05:36.433 00:05:36.433 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.433 http://cunit.sourceforge.net/ 00:05:36.433 00:05:36.433 00:05:36.433 Suite: pci 00:05:36.433 Test: pci_hook ...[2024-11-26 07:15:04.369414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 543522 has claimed it 00:05:36.433 EAL: Cannot find device (10000:00:01.0) 00:05:36.433 EAL: Failed to attach device on primary process 00:05:36.433 passed 00:05:36.433 00:05:36.433 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:36.433 suites 1 1 n/a 0 0 00:05:36.433 tests 1 1 1 0 0 00:05:36.433 asserts 25 25 25 0 n/a 00:05:36.433 00:05:36.433 Elapsed time = 0.026 seconds 00:05:36.433 00:05:36.433 real 0m0.045s 00:05:36.433 user 0m0.016s 00:05:36.433 sys 0m0.029s 00:05:36.433 07:15:04 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.433 07:15:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:36.433 ************************************ 00:05:36.433 END TEST env_pci 00:05:36.433 ************************************ 00:05:36.433 07:15:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:36.433 07:15:04 env -- env/env.sh@15 -- # uname 00:05:36.433 07:15:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:36.433 07:15:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:36.433 07:15:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.433 07:15:04 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:36.433 07:15:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.433 07:15:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.433 ************************************ 00:05:36.433 START TEST env_dpdk_post_init 00:05:36.433 ************************************ 00:05:36.433 07:15:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:36.433 EAL: Detected CPU lcores: 96 00:05:36.433 EAL: Detected NUMA nodes: 2 00:05:36.433 EAL: Detected shared linkage of DPDK 00:05:36.433 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.433 EAL: Selected IOVA mode 'VA' 00:05:36.433 EAL: VFIO support initialized 00:05:36.433 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.692 EAL: Using IOMMU type 1 (Type 1) 00:05:36.692 EAL: Ignore mapping IO port bar(1) 00:05:36.692 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:36.692 EAL: Ignore mapping IO port bar(1) 00:05:36.692 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:36.692 EAL: Ignore mapping IO port bar(1) 00:05:36.692 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:36.692 EAL: Ignore mapping IO port bar(1) 00:05:36.692 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:36.692 EAL: Ignore mapping IO port bar(1) 00:05:36.692 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:36.692 EAL: Ignore mapping IO port bar(1) 00:05:36.692 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:36.692 EAL: Ignore mapping IO port bar(1) 00:05:36.692 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:36.692 EAL: Ignore mapping IO port bar(1) 00:05:36.692 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:37.656 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:37.656 EAL: Ignore mapping IO port bar(1) 00:05:37.656 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:37.656 EAL: Ignore mapping IO port bar(1) 00:05:37.656 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:37.656 EAL: Ignore mapping IO port bar(1) 00:05:37.656 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:37.656 EAL: Ignore mapping IO port bar(1) 00:05:37.656 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:37.656 EAL: Ignore mapping IO port bar(1) 00:05:37.656 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:37.656 EAL: Ignore mapping IO port bar(1) 00:05:37.656 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:37.656 EAL: Ignore mapping IO port bar(1) 00:05:37.656 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:37.656 EAL: Ignore mapping IO port bar(1) 00:05:37.656 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:40.946 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:40.946 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:40.946 Starting DPDK initialization... 00:05:40.946 Starting SPDK post initialization... 00:05:40.946 SPDK NVMe probe 00:05:40.946 Attaching to 0000:5e:00.0 00:05:40.946 Attached to 0000:5e:00.0 00:05:40.946 Cleaning up... 00:05:40.946 00:05:40.946 real 0m4.348s 00:05:40.946 user 0m2.987s 00:05:40.946 sys 0m0.433s 00:05:40.946 07:15:08 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.946 07:15:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.946 ************************************ 00:05:40.946 END TEST env_dpdk_post_init 00:05:40.946 ************************************ 00:05:40.946 07:15:08 env -- env/env.sh@26 -- # uname 00:05:40.946 07:15:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.946 07:15:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.946 07:15:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.946 07:15:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.946 07:15:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.946 ************************************ 00:05:40.946 START TEST env_mem_callbacks 00:05:40.946 ************************************ 00:05:40.946 07:15:08 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.946 EAL: Detected CPU lcores: 96 00:05:40.946 EAL: Detected NUMA nodes: 2 00:05:40.946 EAL: Detected shared linkage of DPDK 00:05:40.946 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.946 EAL: Selected IOVA mode 'VA' 00:05:40.946 EAL: VFIO support initialized 00:05:40.946 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.946 00:05:40.946 00:05:40.946 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.946 http://cunit.sourceforge.net/ 00:05:40.946 00:05:40.946 00:05:40.946 Suite: memory 00:05:40.946 Test: test ... 
00:05:40.946 register 0x200000200000 2097152 00:05:40.946 malloc 3145728 00:05:40.946 register 0x200000400000 4194304 00:05:40.946 buf 0x200000500000 len 3145728 PASSED 00:05:40.946 malloc 64 00:05:40.946 buf 0x2000004fff40 len 64 PASSED 00:05:40.946 malloc 4194304 00:05:40.946 register 0x200000800000 6291456 00:05:40.946 buf 0x200000a00000 len 4194304 PASSED 00:05:40.946 free 0x200000500000 3145728 00:05:40.946 free 0x2000004fff40 64 00:05:40.946 unregister 0x200000400000 4194304 PASSED 00:05:40.946 free 0x200000a00000 4194304 00:05:40.946 unregister 0x200000800000 6291456 PASSED 00:05:40.946 malloc 8388608 00:05:40.946 register 0x200000400000 10485760 00:05:40.946 buf 0x200000600000 len 8388608 PASSED 00:05:40.946 free 0x200000600000 8388608 00:05:40.946 unregister 0x200000400000 10485760 PASSED 00:05:40.946 passed 00:05:40.946 00:05:40.946 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.946 suites 1 1 n/a 0 0 00:05:40.946 tests 1 1 1 0 0 00:05:40.946 asserts 15 15 15 0 n/a 00:05:40.946 00:05:40.946 Elapsed time = 0.005 seconds 00:05:40.946 00:05:40.946 real 0m0.054s 00:05:40.946 user 0m0.019s 00:05:40.946 sys 0m0.035s 00:05:40.946 07:15:08 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.946 07:15:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:40.946 ************************************ 00:05:40.946 END TEST env_mem_callbacks 00:05:40.946 ************************************ 00:05:40.946 00:05:40.946 real 0m6.192s 00:05:40.946 user 0m4.039s 00:05:40.946 sys 0m1.229s 00:05:40.946 07:15:08 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.946 07:15:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.946 ************************************ 00:05:40.946 END TEST env 00:05:40.946 ************************************ 00:05:40.946 07:15:08 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:40.946 07:15:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.946 07:15:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.946 07:15:08 -- common/autotest_common.sh@10 -- # set +x 00:05:40.946 ************************************ 00:05:40.946 START TEST rpc 00:05:40.946 ************************************ 00:05:40.946 07:15:09 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:41.206 * Looking for test storage... 
00:05:41.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.206 07:15:09 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.206 07:15:09 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.206 07:15:09 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.206 07:15:09 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.206 07:15:09 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.206 07:15:09 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.206 07:15:09 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.206 07:15:09 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.206 07:15:09 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.206 07:15:09 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.206 07:15:09 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.206 07:15:09 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.206 07:15:09 rpc -- scripts/common.sh@345 -- # : 1 00:05:41.206 07:15:09 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.206 07:15:09 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.206 07:15:09 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.206 07:15:09 rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.206 07:15:09 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.206 07:15:09 rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.206 07:15:09 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.206 07:15:09 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.206 07:15:09 rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.206 07:15:09 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.206 07:15:09 rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.206 07:15:09 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.206 07:15:09 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.206 07:15:09 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.206 07:15:09 rpc -- scripts/common.sh@368 -- # return 0 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.206 --rc genhtml_branch_coverage=1 00:05:41.206 --rc genhtml_function_coverage=1 00:05:41.206 --rc genhtml_legend=1 00:05:41.206 --rc geninfo_all_blocks=1 00:05:41.206 --rc geninfo_unexecuted_blocks=1 00:05:41.206 00:05:41.206 ' 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.206 --rc genhtml_branch_coverage=1 00:05:41.206 --rc genhtml_function_coverage=1 00:05:41.206 --rc genhtml_legend=1 00:05:41.206 --rc geninfo_all_blocks=1 00:05:41.206 --rc geninfo_unexecuted_blocks=1 00:05:41.206 00:05:41.206 ' 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.206 --rc genhtml_branch_coverage=1 00:05:41.206 --rc genhtml_function_coverage=1 
00:05:41.206 --rc genhtml_legend=1 00:05:41.206 --rc geninfo_all_blocks=1 00:05:41.206 --rc geninfo_unexecuted_blocks=1 00:05:41.206 00:05:41.206 ' 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.206 --rc genhtml_branch_coverage=1 00:05:41.206 --rc genhtml_function_coverage=1 00:05:41.206 --rc genhtml_legend=1 00:05:41.206 --rc geninfo_all_blocks=1 00:05:41.206 --rc geninfo_unexecuted_blocks=1 00:05:41.206 00:05:41.206 ' 00:05:41.206 07:15:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=544616 00:05:41.206 07:15:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.206 07:15:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 544616 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@835 -- # '[' -z 544616 ']' 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.206 07:15:09 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.206 07:15:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.206 [2024-11-26 07:15:09.222433] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:05:41.206 [2024-11-26 07:15:09.222486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544616 ] 00:05:41.206 [2024-11-26 07:15:09.284442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.466 [2024-11-26 07:15:09.327431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.466 [2024-11-26 07:15:09.327468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 544616' to capture a snapshot of events at runtime. 00:05:41.466 [2024-11-26 07:15:09.327475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.466 [2024-11-26 07:15:09.327483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.466 [2024-11-26 07:15:09.327488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid544616 for offline analysis/debug. 
00:05:41.466 [2024-11-26 07:15:09.328085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.466 07:15:09 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.466 07:15:09 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.466 07:15:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.466 07:15:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:41.466 07:15:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.466 07:15:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.466 07:15:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.466 07:15:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.466 07:15:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.726 ************************************ 00:05:41.726 START TEST rpc_integrity 00:05:41.726 ************************************ 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:41.726 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.726 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.726 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.726 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.726 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.726 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.726 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.726 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.726 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.726 { 00:05:41.726 "name": "Malloc0", 00:05:41.726 "aliases": [ 00:05:41.726 "7403e691-0f60-415c-b7c5-fc5517eceeae" 00:05:41.726 ], 00:05:41.726 "product_name": "Malloc disk", 00:05:41.726 "block_size": 512, 00:05:41.726 "num_blocks": 16384, 00:05:41.726 "uuid": "7403e691-0f60-415c-b7c5-fc5517eceeae", 00:05:41.726 "assigned_rate_limits": { 00:05:41.726 "rw_ios_per_sec": 0, 00:05:41.727 "rw_mbytes_per_sec": 0, 00:05:41.727 "r_mbytes_per_sec": 0, 00:05:41.727 "w_mbytes_per_sec": 0 00:05:41.727 }, 
00:05:41.727 "claimed": false, 00:05:41.727 "zoned": false, 00:05:41.727 "supported_io_types": { 00:05:41.727 "read": true, 00:05:41.727 "write": true, 00:05:41.727 "unmap": true, 00:05:41.727 "flush": true, 00:05:41.727 "reset": true, 00:05:41.727 "nvme_admin": false, 00:05:41.727 "nvme_io": false, 00:05:41.727 "nvme_io_md": false, 00:05:41.727 "write_zeroes": true, 00:05:41.727 "zcopy": true, 00:05:41.727 "get_zone_info": false, 00:05:41.727 "zone_management": false, 00:05:41.727 "zone_append": false, 00:05:41.727 "compare": false, 00:05:41.727 "compare_and_write": false, 00:05:41.727 "abort": true, 00:05:41.727 "seek_hole": false, 00:05:41.727 "seek_data": false, 00:05:41.727 "copy": true, 00:05:41.727 "nvme_iov_md": false 00:05:41.727 }, 00:05:41.727 "memory_domains": [ 00:05:41.727 { 00:05:41.727 "dma_device_id": "system", 00:05:41.727 "dma_device_type": 1 00:05:41.727 }, 00:05:41.727 { 00:05:41.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.727 "dma_device_type": 2 00:05:41.727 } 00:05:41.727 ], 00:05:41.727 "driver_specific": {} 00:05:41.727 } 00:05:41.727 ]' 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.727 [2024-11-26 07:15:09.695324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.727 [2024-11-26 07:15:09.695354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.727 [2024-11-26 07:15:09.695366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x101f6e0 00:05:41.727 [2024-11-26 07:15:09.695373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.727 [2024-11-26 07:15:09.696505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.727 [2024-11-26 07:15:09.696527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.727 Passthru0 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.727 { 00:05:41.727 "name": "Malloc0", 00:05:41.727 "aliases": [ 00:05:41.727 "7403e691-0f60-415c-b7c5-fc5517eceeae" 00:05:41.727 ], 00:05:41.727 "product_name": "Malloc disk", 00:05:41.727 "block_size": 512, 00:05:41.727 "num_blocks": 16384, 00:05:41.727 "uuid": "7403e691-0f60-415c-b7c5-fc5517eceeae", 00:05:41.727 "assigned_rate_limits": { 00:05:41.727 "rw_ios_per_sec": 0, 00:05:41.727 "rw_mbytes_per_sec": 0, 00:05:41.727 "r_mbytes_per_sec": 0, 00:05:41.727 "w_mbytes_per_sec": 0 00:05:41.727 }, 00:05:41.727 "claimed": true, 00:05:41.727 "claim_type": "exclusive_write", 00:05:41.727 "zoned": false, 00:05:41.727 "supported_io_types": { 00:05:41.727 "read": true, 00:05:41.727 "write": true, 00:05:41.727 "unmap": true, 00:05:41.727 "flush": 
true, 00:05:41.727 "reset": true, 00:05:41.727 "nvme_admin": false, 00:05:41.727 "nvme_io": false, 00:05:41.727 "nvme_io_md": false, 00:05:41.727 "write_zeroes": true, 00:05:41.727 "zcopy": true, 00:05:41.727 "get_zone_info": false, 00:05:41.727 "zone_management": false, 00:05:41.727 "zone_append": false, 00:05:41.727 "compare": false, 00:05:41.727 "compare_and_write": false, 00:05:41.727 "abort": true, 00:05:41.727 "seek_hole": false, 00:05:41.727 "seek_data": false, 00:05:41.727 "copy": true, 00:05:41.727 "nvme_iov_md": false 00:05:41.727 }, 00:05:41.727 "memory_domains": [ 00:05:41.727 { 00:05:41.727 "dma_device_id": "system", 00:05:41.727 "dma_device_type": 1 00:05:41.727 }, 00:05:41.727 { 00:05:41.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.727 "dma_device_type": 2 00:05:41.727 } 00:05:41.727 ], 00:05:41.727 "driver_specific": {} 00:05:41.727 }, 00:05:41.727 { 00:05:41.727 "name": "Passthru0", 00:05:41.727 "aliases": [ 00:05:41.727 "5811fd2c-b022-50da-87b1-9a6357b5531f" 00:05:41.727 ], 00:05:41.727 "product_name": "passthru", 00:05:41.727 "block_size": 512, 00:05:41.727 "num_blocks": 16384, 00:05:41.727 "uuid": "5811fd2c-b022-50da-87b1-9a6357b5531f", 00:05:41.727 "assigned_rate_limits": { 00:05:41.727 "rw_ios_per_sec": 0, 00:05:41.727 "rw_mbytes_per_sec": 0, 00:05:41.727 "r_mbytes_per_sec": 0, 00:05:41.727 "w_mbytes_per_sec": 0 00:05:41.727 }, 00:05:41.727 "claimed": false, 00:05:41.727 "zoned": false, 00:05:41.727 "supported_io_types": { 00:05:41.727 "read": true, 00:05:41.727 "write": true, 00:05:41.727 "unmap": true, 00:05:41.727 "flush": true, 00:05:41.727 "reset": true, 00:05:41.727 "nvme_admin": false, 00:05:41.727 "nvme_io": false, 00:05:41.727 "nvme_io_md": false, 00:05:41.727 "write_zeroes": true, 00:05:41.727 "zcopy": true, 00:05:41.727 "get_zone_info": false, 00:05:41.727 "zone_management": false, 00:05:41.727 "zone_append": false, 00:05:41.727 "compare": false, 00:05:41.727 "compare_and_write": false, 00:05:41.727 "abort": true, 00:05:41.727 "seek_hole": false, 00:05:41.727 "seek_data": false, 00:05:41.727 "copy": true, 00:05:41.727 "nvme_iov_md": false 00:05:41.727 }, 00:05:41.727 "memory_domains": [ 00:05:41.727 { 00:05:41.727 "dma_device_id": "system", 00:05:41.727 "dma_device_type": 1 00:05:41.727 }, 00:05:41.727 { 00:05:41.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.727 "dma_device_type": 2 00:05:41.727 } 00:05:41.727 ], 00:05:41.727 "driver_specific": { 00:05:41.727 "passthru": { 00:05:41.727 "name": "Passthru0", 00:05:41.727 "base_bdev_name": "Malloc0" 00:05:41.727 } 00:05:41.727 } 00:05:41.727 } 00:05:41.727 ]' 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.727 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.727 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.987 07:15:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.987 00:05:41.987 real 0m0.269s 00:05:41.987 user 0m0.171s 00:05:41.987 sys 0m0.040s 00:05:41.987 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.987 07:15:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.987 ************************************ 00:05:41.987 END TEST rpc_integrity 00:05:41.987 ************************************ 00:05:41.987 07:15:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.987 07:15:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.987 07:15:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.987 07:15:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.987 ************************************ 00:05:41.987 START TEST rpc_plugins 00:05:41.987 ************************************ 00:05:41.987 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:41.987 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.987 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.987 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.987 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.987 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.987 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.987 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.988 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.988 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.988 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.988 { 00:05:41.988 "name": "Malloc1", 00:05:41.988 "aliases": [ 00:05:41.988 "141b02cc-e5a4-4727-a88b-12b807425272" 00:05:41.988 ], 00:05:41.988 "product_name": "Malloc disk", 00:05:41.988 "block_size": 4096, 00:05:41.988 "num_blocks": 256, 00:05:41.988 "uuid": "141b02cc-e5a4-4727-a88b-12b807425272", 00:05:41.988 "assigned_rate_limits": { 00:05:41.988 "rw_ios_per_sec": 0, 00:05:41.988 "rw_mbytes_per_sec": 0, 00:05:41.988 "r_mbytes_per_sec": 0, 00:05:41.988 "w_mbytes_per_sec": 0 00:05:41.988 }, 00:05:41.988 "claimed": false, 00:05:41.988 "zoned": false, 00:05:41.988 "supported_io_types": { 00:05:41.988 "read": true, 00:05:41.988 "write": true, 00:05:41.988 "unmap": true, 00:05:41.988 "flush": true, 00:05:41.988 "reset": true, 00:05:41.988 "nvme_admin": false, 00:05:41.988 "nvme_io": false, 00:05:41.988 "nvme_io_md": false, 00:05:41.988 "write_zeroes": true, 00:05:41.988 "zcopy": true, 00:05:41.988 "get_zone_info": false, 00:05:41.988 "zone_management": false, 00:05:41.988 "zone_append": false, 00:05:41.988 "compare": false, 00:05:41.988 "compare_and_write": false, 00:05:41.988 "abort": true, 00:05:41.988 "seek_hole": false, 00:05:41.988 "seek_data": false, 00:05:41.988 "copy": true, 00:05:41.988 "nvme_iov_md": false 
00:05:41.988 }, 00:05:41.988 "memory_domains": [ 00:05:41.988 { 00:05:41.988 "dma_device_id": "system", 00:05:41.988 "dma_device_type": 1 00:05:41.988 }, 00:05:41.988 { 00:05:41.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.988 "dma_device_type": 2 00:05:41.988 } 00:05:41.988 ], 00:05:41.988 "driver_specific": {} 00:05:41.988 } 00:05:41.988 ]' 00:05:41.988 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:41.988 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.988 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.988 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.988 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.988 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.988 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.988 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.988 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.988 07:15:09 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.988 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.988 07:15:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:41.988 07:15:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.988 00:05:41.988 real 0m0.134s 00:05:41.988 user 0m0.089s 00:05:41.988 sys 0m0.015s 00:05:41.988 07:15:10 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.988 07:15:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.988 ************************************ 00:05:41.988 END TEST rpc_plugins 00:05:41.988 ************************************ 00:05:41.988 07:15:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.988 07:15:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.988 07:15:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.988 07:15:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.248 ************************************ 00:05:42.248 START TEST rpc_trace_cmd_test 00:05:42.248 ************************************ 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.248 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid544616", 00:05:42.248 "tpoint_group_mask": "0x8", 00:05:42.248 "iscsi_conn": { 00:05:42.248 "mask": "0x2", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "scsi": { 00:05:42.248 "mask": "0x4", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "bdev": { 00:05:42.248 "mask": "0x8", 00:05:42.248 "tpoint_mask": "0xffffffffffffffff" 00:05:42.248 }, 00:05:42.248 "nvmf_rdma": { 00:05:42.248 "mask": "0x10", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "nvmf_tcp": { 00:05:42.248 "mask": "0x20", 00:05:42.248 
"tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "ftl": { 00:05:42.248 "mask": "0x40", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "blobfs": { 00:05:42.248 "mask": "0x80", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "dsa": { 00:05:42.248 "mask": "0x200", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "thread": { 00:05:42.248 "mask": "0x400", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "nvme_pcie": { 00:05:42.248 "mask": "0x800", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "iaa": { 00:05:42.248 "mask": "0x1000", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "nvme_tcp": { 00:05:42.248 "mask": "0x2000", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "bdev_nvme": { 00:05:42.248 "mask": "0x4000", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "sock": { 00:05:42.248 "mask": "0x8000", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "blob": { 00:05:42.248 "mask": "0x10000", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "bdev_raid": { 00:05:42.248 "mask": "0x20000", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 }, 00:05:42.248 "scheduler": { 00:05:42.248 "mask": "0x40000", 00:05:42.248 "tpoint_mask": "0x0" 00:05:42.248 } 00:05:42.248 }' 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.248 00:05:42.248 real 0m0.221s 00:05:42.248 user 0m0.184s 00:05:42.248 sys 0m0.027s 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.248 07:15:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.248 ************************************ 00:05:42.248 END TEST rpc_trace_cmd_test 00:05:42.248 ************************************ 00:05:42.507 07:15:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.507 07:15:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.507 07:15:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.507 07:15:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.507 07:15:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.507 07:15:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.507 ************************************ 00:05:42.507 START TEST rpc_daemon_integrity 00:05:42.507 ************************************ 00:05:42.507 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:42.507 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.507 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.507 07:15:10 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.507 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.507 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.507 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.508 { 00:05:42.508 "name": "Malloc2", 00:05:42.508 "aliases": [ 00:05:42.508 "c7eda1cb-b640-4437-9285-0a4f1d6352da" 00:05:42.508 ], 00:05:42.508 "product_name": "Malloc disk", 00:05:42.508 "block_size": 512, 00:05:42.508 "num_blocks": 16384, 00:05:42.508 "uuid": "c7eda1cb-b640-4437-9285-0a4f1d6352da", 00:05:42.508 "assigned_rate_limits": { 00:05:42.508 "rw_ios_per_sec": 0, 00:05:42.508 "rw_mbytes_per_sec": 0, 00:05:42.508 "r_mbytes_per_sec": 0, 00:05:42.508 "w_mbytes_per_sec": 0 00:05:42.508 }, 00:05:42.508 "claimed": false, 00:05:42.508 "zoned": false, 00:05:42.508 "supported_io_types": { 00:05:42.508 "read": true, 00:05:42.508 "write": true, 00:05:42.508 "unmap": true, 00:05:42.508 "flush": true, 00:05:42.508 "reset": true, 00:05:42.508 "nvme_admin": false, 00:05:42.508 "nvme_io": false, 00:05:42.508 "nvme_io_md": false, 00:05:42.508 "write_zeroes": true, 00:05:42.508 "zcopy": true, 00:05:42.508 "get_zone_info": false, 00:05:42.508 "zone_management": false, 00:05:42.508 "zone_append": false, 00:05:42.508 "compare": false, 00:05:42.508 "compare_and_write": false, 00:05:42.508 "abort": true, 00:05:42.508 "seek_hole": false, 00:05:42.508 "seek_data": false, 00:05:42.508 "copy": true, 00:05:42.508 "nvme_iov_md": false 00:05:42.508 }, 00:05:42.508 "memory_domains": [ 00:05:42.508 { 00:05:42.508 "dma_device_id": "system", 00:05:42.508 "dma_device_type": 1 00:05:42.508 }, 00:05:42.508 { 00:05:42.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.508 "dma_device_type": 2 00:05:42.508 } 00:05:42.508 ], 00:05:42.508 "driver_specific": {} 00:05:42.508 } 00:05:42.508 ]' 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.508 [2024-11-26 07:15:10.521582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.508 
[2024-11-26 07:15:10.521611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.508 [2024-11-26 07:15:10.521622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10afb70 00:05:42.508 [2024-11-26 07:15:10.521628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.508 [2024-11-26 07:15:10.522622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.508 [2024-11-26 07:15:10.522644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.508 Passthru0 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.508 { 00:05:42.508 "name": "Malloc2", 00:05:42.508 "aliases": [ 00:05:42.508 "c7eda1cb-b640-4437-9285-0a4f1d6352da" 00:05:42.508 ], 00:05:42.508 "product_name": "Malloc disk", 00:05:42.508 "block_size": 512, 00:05:42.508 "num_blocks": 16384, 00:05:42.508 "uuid": "c7eda1cb-b640-4437-9285-0a4f1d6352da", 00:05:42.508 "assigned_rate_limits": { 00:05:42.508 "rw_ios_per_sec": 0, 00:05:42.508 "rw_mbytes_per_sec": 0, 00:05:42.508 "r_mbytes_per_sec": 0, 00:05:42.508 "w_mbytes_per_sec": 0 00:05:42.508 }, 00:05:42.508 "claimed": true, 00:05:42.508 "claim_type": "exclusive_write", 00:05:42.508 "zoned": false, 00:05:42.508 "supported_io_types": { 00:05:42.508 "read": true, 00:05:42.508 "write": true, 00:05:42.508 "unmap": true, 00:05:42.508 "flush": true, 00:05:42.508 "reset": true, 00:05:42.508 "nvme_admin": false, 00:05:42.508 "nvme_io": false, 00:05:42.508 "nvme_io_md": false, 00:05:42.508 "write_zeroes": true, 00:05:42.508 "zcopy": true, 00:05:42.508 "get_zone_info": false, 00:05:42.508 "zone_management": false, 00:05:42.508 "zone_append": false, 00:05:42.508 "compare": false, 00:05:42.508 "compare_and_write": false, 00:05:42.508 "abort": true, 00:05:42.508 "seek_hole": false, 00:05:42.508 "seek_data": false, 00:05:42.508 "copy": true, 00:05:42.508 "nvme_iov_md": false 00:05:42.508 }, 00:05:42.508 "memory_domains": [ 00:05:42.508 { 00:05:42.508 "dma_device_id": "system", 00:05:42.508 "dma_device_type": 1 00:05:42.508 }, 00:05:42.508 { 00:05:42.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.508 "dma_device_type": 2 00:05:42.508 } 00:05:42.508 ], 00:05:42.508 "driver_specific": {} 00:05:42.508 }, 00:05:42.508 { 00:05:42.508 "name": "Passthru0", 00:05:42.508 "aliases": [ 00:05:42.508 "7db114c8-ab6c-5221-8f15-d16d1f101cc9" 00:05:42.508 ], 00:05:42.508 "product_name": "passthru", 00:05:42.508 "block_size": 512, 00:05:42.508 "num_blocks": 16384, 00:05:42.508 "uuid": "7db114c8-ab6c-5221-8f15-d16d1f101cc9", 00:05:42.508 "assigned_rate_limits": { 00:05:42.508 "rw_ios_per_sec": 0, 00:05:42.508 "rw_mbytes_per_sec": 0, 00:05:42.508 "r_mbytes_per_sec": 0, 00:05:42.508 "w_mbytes_per_sec": 0 00:05:42.508 }, 00:05:42.508 "claimed": false, 00:05:42.508 "zoned": false, 00:05:42.508 "supported_io_types": { 00:05:42.508 "read": true, 00:05:42.508 "write": true, 00:05:42.508 "unmap": true, 00:05:42.508 "flush": true, 00:05:42.508 "reset": true, 
00:05:42.508 "nvme_admin": false, 00:05:42.508 "nvme_io": false, 00:05:42.508 "nvme_io_md": false, 00:05:42.508 "write_zeroes": true, 00:05:42.508 "zcopy": true, 00:05:42.508 "get_zone_info": false, 00:05:42.508 "zone_management": false, 00:05:42.508 "zone_append": false, 00:05:42.508 "compare": false, 00:05:42.508 "compare_and_write": false, 00:05:42.508 "abort": true, 00:05:42.508 "seek_hole": false, 00:05:42.508 "seek_data": false, 00:05:42.508 "copy": true, 00:05:42.508 "nvme_iov_md": false 00:05:42.508 }, 00:05:42.508 "memory_domains": [ 00:05:42.508 { 00:05:42.508 "dma_device_id": "system", 00:05:42.508 "dma_device_type": 1 00:05:42.508 }, 00:05:42.508 { 00:05:42.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.508 "dma_device_type": 2 00:05:42.508 } 00:05:42.508 ], 00:05:42.508 "driver_specific": { 00:05:42.508 "passthru": { 00:05:42.508 "name": "Passthru0", 00:05:42.508 "base_bdev_name": "Malloc2" 00:05:42.508 } 00:05:42.508 } 00:05:42.508 } 00:05:42.508 ]' 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.508 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.768 00:05:42.768 real 0m0.270s 00:05:42.768 user 0m0.168s 00:05:42.768 sys 0m0.041s 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.768 07:15:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.768 ************************************ 00:05:42.768 END TEST rpc_daemon_integrity 00:05:42.768 ************************************ 00:05:42.768 07:15:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.768 07:15:10 rpc -- rpc/rpc.sh@84 -- # killprocess 544616 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@954 -- # '[' -z 544616 ']' 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@958 -- # kill -0 544616 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 544616 
00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 544616' 00:05:42.768 killing process with pid 544616 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@973 -- # kill 544616 00:05:42.768 07:15:10 rpc -- common/autotest_common.sh@978 -- # wait 544616 00:05:43.027 00:05:43.027 real 0m2.033s 00:05:43.027 user 0m2.617s 00:05:43.027 sys 0m0.680s 00:05:43.027 07:15:11 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.027 07:15:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.027 ************************************ 00:05:43.027 END TEST rpc 00:05:43.027 ************************************ 00:05:43.027 07:15:11 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.027 07:15:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.027 07:15:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.027 07:15:11 -- common/autotest_common.sh@10 -- # set +x 00:05:43.027 ************************************ 00:05:43.027 START TEST skip_rpc 00:05:43.027 ************************************ 00:05:43.027 07:15:11 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:43.286 * Looking for test storage... 00:05:43.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.286 07:15:11 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.287 07:15:11 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:43.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.287 --rc genhtml_branch_coverage=1 00:05:43.287 --rc genhtml_function_coverage=1 00:05:43.287 --rc genhtml_legend=1 00:05:43.287 --rc geninfo_all_blocks=1 00:05:43.287 --rc geninfo_unexecuted_blocks=1 00:05:43.287 00:05:43.287 ' 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:43.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.287 --rc genhtml_branch_coverage=1 00:05:43.287 --rc genhtml_function_coverage=1 00:05:43.287 --rc genhtml_legend=1 00:05:43.287 --rc geninfo_all_blocks=1 00:05:43.287 --rc geninfo_unexecuted_blocks=1 00:05:43.287 00:05:43.287 ' 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:43.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.287 --rc genhtml_branch_coverage=1 00:05:43.287 --rc genhtml_function_coverage=1 00:05:43.287 --rc genhtml_legend=1 00:05:43.287 --rc geninfo_all_blocks=1 00:05:43.287 --rc geninfo_unexecuted_blocks=1 00:05:43.287 00:05:43.287 ' 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:43.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.287 --rc genhtml_branch_coverage=1 00:05:43.287 --rc genhtml_function_coverage=1 00:05:43.287 --rc genhtml_legend=1 00:05:43.287 --rc geninfo_all_blocks=1 00:05:43.287 --rc geninfo_unexecuted_blocks=1 00:05:43.287 00:05:43.287 ' 00:05:43.287 07:15:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.287 07:15:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:43.287 07:15:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.287 07:15:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.287 ************************************ 00:05:43.287 START TEST skip_rpc 00:05:43.287 ************************************ 00:05:43.287 07:15:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:43.287 
07:15:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=545400 00:05:43.287 07:15:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.287 07:15:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:43.287 07:15:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:43.287 [2024-11-26 07:15:11.360969] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:05:43.287 [2024-11-26 07:15:11.361011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545400 ] 00:05:43.546 [2024-11-26 07:15:11.422782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.546 [2024-11-26 07:15:11.462930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 545400 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 545400 ']' 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 545400 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 545400 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 545400' 00:05:48.816 killing process with pid 545400 00:05:48.816 07:15:16 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 545400 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 545400 00:05:48.816 00:05:48.816 real 0m5.359s 00:05:48.816 user 0m5.126s 00:05:48.816 sys 0m0.267s 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.816 07:15:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.816 ************************************ 00:05:48.816 END TEST skip_rpc 00:05:48.816 ************************************ 00:05:48.816 07:15:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:48.816 07:15:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.816 07:15:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.816 07:15:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.816 ************************************ 00:05:48.816 START TEST skip_rpc_with_json 00:05:48.816 ************************************ 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=546344 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 546344 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 546344 ']' 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.816 07:15:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.816 [2024-11-26 07:15:16.788608] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:05:48.816 [2024-11-26 07:15:16.788652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546344 ] 00:05:48.816 [2024-11-26 07:15:16.849942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.816 [2024-11-26 07:15:16.892810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.075 [2024-11-26 07:15:17.106400] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:49.075 request: 00:05:49.075 { 00:05:49.075 "trtype": "tcp", 00:05:49.075 "method": "nvmf_get_transports", 00:05:49.075 "req_id": 1 00:05:49.075 } 00:05:49.075 Got JSON-RPC error response 00:05:49.075 response: 00:05:49.075 { 00:05:49.075 "code": -19, 00:05:49.075 "message": "No such device" 00:05:49.075 } 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.075 [2024-11-26 07:15:17.114496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.075 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.334 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.334 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:49.334 { 00:05:49.334 "subsystems": [ 00:05:49.334 { 00:05:49.334 "subsystem": "fsdev", 00:05:49.334 "config": [ 00:05:49.334 { 00:05:49.334 "method": "fsdev_set_opts", 00:05:49.334 "params": { 00:05:49.334 "fsdev_io_pool_size": 65535, 00:05:49.334 "fsdev_io_cache_size": 256 00:05:49.334 } 00:05:49.334 } 00:05:49.334 ] 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "subsystem": "vfio_user_target", 00:05:49.334 "config": null 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "subsystem": "keyring", 00:05:49.334 "config": [] 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "subsystem": "iobuf", 00:05:49.334 "config": [ 00:05:49.334 { 00:05:49.334 "method": "iobuf_set_options", 00:05:49.334 "params": { 00:05:49.334 "small_pool_count": 8192, 00:05:49.334 "large_pool_count": 1024, 00:05:49.334 "small_bufsize": 8192, 00:05:49.334 "large_bufsize": 135168, 00:05:49.334 "enable_numa": false 00:05:49.334 } 00:05:49.334 } 00:05:49.334 
] 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "subsystem": "sock", 00:05:49.334 "config": [ 00:05:49.334 { 00:05:49.334 "method": "sock_set_default_impl", 00:05:49.334 "params": { 00:05:49.334 "impl_name": "posix" 00:05:49.334 } 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "method": "sock_impl_set_options", 00:05:49.334 "params": { 00:05:49.334 "impl_name": "ssl", 00:05:49.334 "recv_buf_size": 4096, 00:05:49.334 "send_buf_size": 4096, 00:05:49.334 "enable_recv_pipe": true, 00:05:49.334 "enable_quickack": false, 00:05:49.334 "enable_placement_id": 0, 00:05:49.334 "enable_zerocopy_send_server": true, 00:05:49.334 "enable_zerocopy_send_client": false, 00:05:49.334 "zerocopy_threshold": 0, 00:05:49.334 "tls_version": 0, 00:05:49.334 "enable_ktls": false 00:05:49.334 } 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "method": "sock_impl_set_options", 00:05:49.334 "params": { 00:05:49.334 "impl_name": "posix", 00:05:49.334 "recv_buf_size": 2097152, 00:05:49.334 "send_buf_size": 2097152, 00:05:49.334 "enable_recv_pipe": true, 00:05:49.334 "enable_quickack": false, 00:05:49.334 "enable_placement_id": 0, 00:05:49.334 "enable_zerocopy_send_server": true, 00:05:49.334 "enable_zerocopy_send_client": false, 00:05:49.334 "zerocopy_threshold": 0, 00:05:49.334 "tls_version": 0, 00:05:49.334 "enable_ktls": false 00:05:49.334 } 00:05:49.334 } 00:05:49.334 ] 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "subsystem": "vmd", 00:05:49.334 "config": [] 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "subsystem": "accel", 00:05:49.334 "config": [ 00:05:49.334 { 00:05:49.334 "method": "accel_set_options", 00:05:49.334 "params": { 00:05:49.334 "small_cache_size": 128, 00:05:49.334 "large_cache_size": 16, 00:05:49.334 "task_count": 2048, 00:05:49.334 "sequence_count": 2048, 00:05:49.334 "buf_count": 2048 00:05:49.334 } 00:05:49.334 } 00:05:49.334 ] 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "subsystem": "bdev", 00:05:49.334 "config": [ 00:05:49.334 { 00:05:49.334 "method": "bdev_set_options", 00:05:49.334 "params": { 00:05:49.334 "bdev_io_pool_size": 65535, 00:05:49.334 "bdev_io_cache_size": 256, 00:05:49.334 "bdev_auto_examine": true, 00:05:49.334 "iobuf_small_cache_size": 128, 00:05:49.334 "iobuf_large_cache_size": 16 00:05:49.334 } 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "method": "bdev_raid_set_options", 00:05:49.334 "params": { 00:05:49.334 "process_window_size_kb": 1024, 00:05:49.334 "process_max_bandwidth_mb_sec": 0 00:05:49.334 } 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "method": "bdev_iscsi_set_options", 00:05:49.334 "params": { 00:05:49.334 "timeout_sec": 30 00:05:49.334 } 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "method": "bdev_nvme_set_options", 00:05:49.334 "params": { 00:05:49.334 "action_on_timeout": "none", 00:05:49.334 "timeout_us": 0, 00:05:49.334 "timeout_admin_us": 0, 00:05:49.334 "keep_alive_timeout_ms": 10000, 00:05:49.334 "arbitration_burst": 0, 00:05:49.334 "low_priority_weight": 0, 00:05:49.334 "medium_priority_weight": 0, 00:05:49.334 "high_priority_weight": 0, 00:05:49.334 "nvme_adminq_poll_period_us": 10000, 00:05:49.334 "nvme_ioq_poll_period_us": 0, 00:05:49.334 "io_queue_requests": 0, 00:05:49.334 "delay_cmd_submit": true, 00:05:49.334 "transport_retry_count": 4, 00:05:49.334 "bdev_retry_count": 3, 00:05:49.334 "transport_ack_timeout": 0, 00:05:49.334 "ctrlr_loss_timeout_sec": 0, 00:05:49.334 "reconnect_delay_sec": 0, 00:05:49.334 "fast_io_fail_timeout_sec": 0, 00:05:49.334 "disable_auto_failback": false, 00:05:49.334 "generate_uuids": false, 00:05:49.334 "transport_tos": 0, 
00:05:49.334 "nvme_error_stat": false, 00:05:49.334 "rdma_srq_size": 0, 00:05:49.334 "io_path_stat": false, 00:05:49.334 "allow_accel_sequence": false, 00:05:49.334 "rdma_max_cq_size": 0, 00:05:49.334 "rdma_cm_event_timeout_ms": 0, 00:05:49.334 "dhchap_digests": [ 00:05:49.334 "sha256", 00:05:49.334 "sha384", 00:05:49.334 "sha512" 00:05:49.334 ], 00:05:49.334 "dhchap_dhgroups": [ 00:05:49.334 "null", 00:05:49.334 "ffdhe2048", 00:05:49.334 "ffdhe3072", 00:05:49.334 "ffdhe4096", 00:05:49.334 "ffdhe6144", 00:05:49.334 "ffdhe8192" 00:05:49.334 ] 00:05:49.334 } 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "method": "bdev_nvme_set_hotplug", 00:05:49.334 "params": { 00:05:49.334 "period_us": 100000, 00:05:49.334 "enable": false 00:05:49.334 } 00:05:49.334 }, 00:05:49.334 { 00:05:49.334 "method": "bdev_wait_for_examine" 00:05:49.335 } 00:05:49.335 ] 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "subsystem": "scsi", 00:05:49.335 "config": null 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "subsystem": "scheduler", 00:05:49.335 "config": [ 00:05:49.335 { 00:05:49.335 "method": "framework_set_scheduler", 00:05:49.335 "params": { 00:05:49.335 "name": "static" 00:05:49.335 } 00:05:49.335 } 00:05:49.335 ] 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "subsystem": "vhost_scsi", 00:05:49.335 "config": [] 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "subsystem": "vhost_blk", 00:05:49.335 "config": [] 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "subsystem": "ublk", 00:05:49.335 "config": [] 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "subsystem": "nbd", 00:05:49.335 "config": [] 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "subsystem": "nvmf", 00:05:49.335 "config": [ 00:05:49.335 { 00:05:49.335 "method": "nvmf_set_config", 00:05:49.335 "params": { 00:05:49.335 "discovery_filter": "match_any", 00:05:49.335 "admin_cmd_passthru": { 00:05:49.335 "identify_ctrlr": false 00:05:49.335 }, 00:05:49.335 "dhchap_digests": [ 00:05:49.335 "sha256", 00:05:49.335 "sha384", 00:05:49.335 "sha512" 00:05:49.335 ], 00:05:49.335 "dhchap_dhgroups": [ 00:05:49.335 "null", 00:05:49.335 "ffdhe2048", 00:05:49.335 "ffdhe3072", 00:05:49.335 "ffdhe4096", 00:05:49.335 "ffdhe6144", 00:05:49.335 "ffdhe8192" 00:05:49.335 ] 00:05:49.335 } 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "method": "nvmf_set_max_subsystems", 00:05:49.335 "params": { 00:05:49.335 "max_subsystems": 1024 00:05:49.335 } 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "method": "nvmf_set_crdt", 00:05:49.335 "params": { 00:05:49.335 "crdt1": 0, 00:05:49.335 "crdt2": 0, 00:05:49.335 "crdt3": 0 00:05:49.335 } 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "method": "nvmf_create_transport", 00:05:49.335 "params": { 00:05:49.335 "trtype": "TCP", 00:05:49.335 "max_queue_depth": 128, 00:05:49.335 "max_io_qpairs_per_ctrlr": 127, 00:05:49.335 "in_capsule_data_size": 4096, 00:05:49.335 "max_io_size": 131072, 00:05:49.335 "io_unit_size": 131072, 00:05:49.335 "max_aq_depth": 128, 00:05:49.335 "num_shared_buffers": 511, 00:05:49.335 "buf_cache_size": 4294967295, 00:05:49.335 "dif_insert_or_strip": false, 00:05:49.335 "zcopy": false, 00:05:49.335 "c2h_success": true, 00:05:49.335 "sock_priority": 0, 00:05:49.335 "abort_timeout_sec": 1, 00:05:49.335 "ack_timeout": 0, 00:05:49.335 "data_wr_pool_size": 0 00:05:49.335 } 00:05:49.335 } 00:05:49.335 ] 00:05:49.335 }, 00:05:49.335 { 00:05:49.335 "subsystem": "iscsi", 00:05:49.335 "config": [ 00:05:49.335 { 00:05:49.335 "method": "iscsi_set_options", 00:05:49.335 "params": { 00:05:49.335 "node_base": "iqn.2016-06.io.spdk", 00:05:49.335 "max_sessions": 
128, 00:05:49.335 "max_connections_per_session": 2, 00:05:49.335 "max_queue_depth": 64, 00:05:49.335 "default_time2wait": 2, 00:05:49.335 "default_time2retain": 20, 00:05:49.335 "first_burst_length": 8192, 00:05:49.335 "immediate_data": true, 00:05:49.335 "allow_duplicated_isid": false, 00:05:49.335 "error_recovery_level": 0, 00:05:49.335 "nop_timeout": 60, 00:05:49.335 "nop_in_interval": 30, 00:05:49.335 "disable_chap": false, 00:05:49.335 "require_chap": false, 00:05:49.335 "mutual_chap": false, 00:05:49.335 "chap_group": 0, 00:05:49.335 "max_large_datain_per_connection": 64, 00:05:49.335 "max_r2t_per_connection": 4, 00:05:49.335 "pdu_pool_size": 36864, 00:05:49.335 "immediate_data_pool_size": 16384, 00:05:49.335 "data_out_pool_size": 2048 00:05:49.335 } 00:05:49.335 } 00:05:49.335 ] 00:05:49.335 } 00:05:49.335 ] 00:05:49.335 } 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 546344 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 546344 ']' 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 546344 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 546344 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 546344' 00:05:49.335 killing process with pid 546344 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 546344 00:05:49.335 07:15:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 546344 00:05:49.594 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=546369 00:05:49.594 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:49.594 07:15:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 546369 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 546369 ']' 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 546369 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 546369 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 546369' 00:05:54.866 killing process with pid 546369 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 546369 00:05:54.866 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 546369 00:05:55.124 07:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.124 07:15:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:55.124 00:05:55.124 real 0m6.244s 00:05:55.124 user 0m5.941s 00:05:55.124 sys 0m0.567s 00:05:55.125 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.125 07:15:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.125 ************************************ 00:05:55.125 END TEST skip_rpc_with_json 00:05:55.125 ************************************ 00:05:55.125 07:15:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:55.125 07:15:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.125 07:15:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.125 07:15:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.125 ************************************ 00:05:55.125 START TEST skip_rpc_with_delay 00:05:55.125 ************************************ 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.125 [2024-11-26 
07:15:23.101612] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.125 00:05:55.125 real 0m0.067s 00:05:55.125 user 0m0.039s 00:05:55.125 sys 0m0.027s 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.125 07:15:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:55.125 ************************************ 00:05:55.125 END TEST skip_rpc_with_delay 00:05:55.125 ************************************ 00:05:55.125 07:15:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.125 07:15:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.125 07:15:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.125 07:15:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.125 07:15:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.125 07:15:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.125 ************************************ 00:05:55.125 START TEST exit_on_failed_rpc_init 00:05:55.125 ************************************ 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=547341 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 547341 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 547341 ']' 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.125 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.384 [2024-11-26 07:15:23.232966] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:05:55.384 [2024-11-26 07:15:23.233011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547341 ] 00:05:55.384 [2024-11-26 07:15:23.294760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.384 [2024-11-26 07:15:23.337530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:55.643 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.643 [2024-11-26 07:15:23.604365] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:05:55.643 [2024-11-26 07:15:23.604414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547563 ] 00:05:55.643 [2024-11-26 07:15:23.666104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.643 [2024-11-26 07:15:23.706963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.643 [2024-11-26 07:15:23.707019] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
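For readers following the exit_on_failed_rpc_init trace around this point: the scenario reduces to one spdk_tgt instance holding the default RPC socket while a second instance is launched against the same socket and is expected to exit non-zero. The following is only a simplified sketch of that flow, not the harness's exact code; it assumes the spdk_tgt path used in this run and the default /var/tmp/spdk.sock socket, and the real harness waits for the first target via waitforlisten rather than a fixed sleep.

  # Sketch only; assumptions noted above.
  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  "$SPDK_TGT" -m 0x1 &            # first instance binds /var/tmp/spdk.sock
  first_pid=$!
  sleep 1                          # crude wait; the harness polls the RPC socket instead

  if "$SPDK_TGT" -m 0x2; then      # second instance must fail: RPC socket already in use
    echo "unexpected success" >&2
    kill "$first_pid"
    exit 1
  fi
  kill -SIGINT "$first_pid"        # shut the surviving target down cleanly

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" and "spdk_app_stop'd on non-zero" messages in the log are the expected failure path of that second launch.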
00:05:55.643 [2024-11-26 07:15:23.707028] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:55.643 [2024-11-26 07:15:23.707036] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 547341 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 547341 ']' 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 547341 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547341 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547341' 00:05:55.903 killing process with pid 547341 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 547341 00:05:55.903 07:15:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 547341 00:05:56.164 00:05:56.164 real 0m0.918s 00:05:56.164 user 0m0.990s 00:05:56.164 sys 0m0.355s 00:05:56.164 07:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.164 07:15:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.164 ************************************ 00:05:56.164 END TEST exit_on_failed_rpc_init 00:05:56.164 ************************************ 00:05:56.164 07:15:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:56.164 00:05:56.164 real 0m13.019s 00:05:56.164 user 0m12.283s 00:05:56.164 sys 0m1.485s 00:05:56.164 07:15:24 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.164 07:15:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.164 ************************************ 00:05:56.164 END TEST skip_rpc 00:05:56.164 ************************************ 00:05:56.164 07:15:24 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.164 07:15:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.164 07:15:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.164 07:15:24 -- 
common/autotest_common.sh@10 -- # set +x 00:05:56.164 ************************************ 00:05:56.164 START TEST rpc_client 00:05:56.164 ************************************ 00:05:56.164 07:15:24 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:56.425 * Looking for test storage... 00:05:56.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.425 07:15:24 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.425 --rc genhtml_branch_coverage=1 00:05:56.425 --rc genhtml_function_coverage=1 00:05:56.425 --rc genhtml_legend=1 00:05:56.425 --rc geninfo_all_blocks=1 00:05:56.425 --rc geninfo_unexecuted_blocks=1 00:05:56.425 00:05:56.425 ' 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.425 --rc genhtml_branch_coverage=1 00:05:56.425 --rc genhtml_function_coverage=1 00:05:56.425 --rc genhtml_legend=1 00:05:56.425 --rc geninfo_all_blocks=1 00:05:56.425 --rc geninfo_unexecuted_blocks=1 00:05:56.425 00:05:56.425 ' 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.425 --rc genhtml_branch_coverage=1 00:05:56.425 --rc genhtml_function_coverage=1 00:05:56.425 --rc genhtml_legend=1 00:05:56.425 --rc geninfo_all_blocks=1 00:05:56.425 --rc geninfo_unexecuted_blocks=1 00:05:56.425 00:05:56.425 ' 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.425 --rc genhtml_branch_coverage=1 00:05:56.425 --rc genhtml_function_coverage=1 00:05:56.425 --rc genhtml_legend=1 00:05:56.425 --rc geninfo_all_blocks=1 00:05:56.425 --rc geninfo_unexecuted_blocks=1 00:05:56.425 00:05:56.425 ' 00:05:56.425 07:15:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:56.425 OK 00:05:56.425 07:15:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:56.425 00:05:56.425 real 0m0.200s 00:05:56.425 user 0m0.116s 00:05:56.425 sys 0m0.098s 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.425 07:15:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:56.425 ************************************ 00:05:56.425 END TEST rpc_client 00:05:56.425 ************************************ 00:05:56.425 07:15:24 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:05:56.425 07:15:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.425 07:15:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.425 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:05:56.425 ************************************ 00:05:56.425 START TEST json_config 00:05:56.425 ************************************ 00:05:56.425 07:15:24 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.686 07:15:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.686 07:15:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.686 07:15:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.686 07:15:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.686 07:15:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.686 07:15:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.686 07:15:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.686 07:15:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.686 07:15:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.686 07:15:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.686 07:15:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.686 07:15:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:56.686 07:15:24 json_config -- scripts/common.sh@345 -- # : 1 00:05:56.686 07:15:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.686 07:15:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.686 07:15:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:56.686 07:15:24 json_config -- scripts/common.sh@353 -- # local d=1 00:05:56.686 07:15:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.686 07:15:24 json_config -- scripts/common.sh@355 -- # echo 1 00:05:56.686 07:15:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.686 07:15:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:56.686 07:15:24 json_config -- scripts/common.sh@353 -- # local d=2 00:05:56.686 07:15:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.686 07:15:24 json_config -- scripts/common.sh@355 -- # echo 2 00:05:56.686 07:15:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.686 07:15:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.686 07:15:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.686 07:15:24 json_config -- scripts/common.sh@368 -- # return 0 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.686 --rc genhtml_branch_coverage=1 00:05:56.686 --rc genhtml_function_coverage=1 00:05:56.686 --rc genhtml_legend=1 00:05:56.686 --rc geninfo_all_blocks=1 00:05:56.686 --rc geninfo_unexecuted_blocks=1 00:05:56.686 00:05:56.686 ' 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.686 --rc genhtml_branch_coverage=1 00:05:56.686 --rc genhtml_function_coverage=1 00:05:56.686 --rc genhtml_legend=1 00:05:56.686 --rc geninfo_all_blocks=1 00:05:56.686 --rc geninfo_unexecuted_blocks=1 00:05:56.686 00:05:56.686 ' 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.686 --rc genhtml_branch_coverage=1 00:05:56.686 --rc genhtml_function_coverage=1 00:05:56.686 --rc genhtml_legend=1 00:05:56.686 --rc geninfo_all_blocks=1 00:05:56.686 --rc geninfo_unexecuted_blocks=1 00:05:56.686 00:05:56.686 ' 00:05:56.686 07:15:24 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.686 --rc genhtml_branch_coverage=1 00:05:56.686 --rc genhtml_function_coverage=1 00:05:56.686 --rc genhtml_legend=1 00:05:56.686 --rc geninfo_all_blocks=1 00:05:56.686 --rc geninfo_unexecuted_blocks=1 00:05:56.686 00:05:56.686 ' 00:05:56.686 07:15:24 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.686 07:15:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:56.686 07:15:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.686 07:15:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.686 07:15:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.686 07:15:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.686 07:15:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.686 07:15:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.686 07:15:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:56.686 07:15:24 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:56.687 07:15:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.687 07:15:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.687 07:15:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.687 07:15:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.687 07:15:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.687 07:15:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.687 07:15:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.687 07:15:24 json_config -- paths/export.sh@5 -- # export PATH 00:05:56.687 07:15:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@51 -- # : 0 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:56.687 07:15:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.687 07:15:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:56.687 INFO: JSON configuration test init 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.687 07:15:24 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:56.687 07:15:24 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:56.687 07:15:24 json_config -- json_config/common.sh@10 -- # shift 00:05:56.687 07:15:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.687 07:15:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.687 07:15:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.687 07:15:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.687 07:15:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.687 07:15:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=547758 00:05:56.687 07:15:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.687 Waiting for target to run... 00:05:56.687 07:15:24 json_config -- json_config/common.sh@25 -- # waitforlisten 547758 /var/tmp/spdk_tgt.sock 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 547758 ']' 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.687 07:15:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.687 07:15:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.687 [2024-11-26 07:15:24.718082] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:05:56.687 [2024-11-26 07:15:24.718134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547758 ] 00:05:57.255 [2024-11-26 07:15:25.161892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.255 [2024-11-26 07:15:25.214261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.514 07:15:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.514 07:15:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:57.514 07:15:25 json_config -- json_config/common.sh@26 -- # echo '' 00:05:57.514 00:05:57.514 07:15:25 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:57.514 07:15:25 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:57.514 07:15:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.514 07:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.514 07:15:25 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:57.514 07:15:25 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:57.514 07:15:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.514 07:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.514 07:15:25 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:57.514 07:15:25 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:57.514 07:15:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:00.804 07:15:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.804 07:15:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:00.804 07:15:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:00.804 07:15:28 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@54 -- # sort 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:00.804 07:15:28 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:00.804 07:15:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.804 07:15:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:01.063 07:15:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.063 07:15:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:01.063 07:15:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.063 07:15:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.063 MallocForNvmf0 00:06:01.063 07:15:29 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.063 07:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.321 MallocForNvmf1 00:06:01.321 07:15:29 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:01.321 07:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:01.579 [2024-11-26 07:15:29.474738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.579 07:15:29 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:01.579 07:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:01.579 07:15:29 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:01.579 07:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:01.836 07:15:29 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:01.837 07:15:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:02.097 07:15:30 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:02.097 07:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:02.355 [2024-11-26 07:15:30.217108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:02.355 07:15:30 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:02.355 07:15:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.355 07:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.355 07:15:30 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:02.355 07:15:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.355 07:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.355 07:15:30 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:02.355 07:15:30 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:02.355 07:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:02.614 MallocBdevForConfigChangeCheck 00:06:02.614 07:15:30 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:02.614 07:15:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.614 07:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.614 07:15:30 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:02.614 07:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.872 07:15:30 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:02.872 INFO: shutting down applications... 
00:06:02.872 07:15:30 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:02.872 07:15:30 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:02.872 07:15:30 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:02.872 07:15:30 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:04.776 Calling clear_iscsi_subsystem 00:06:04.776 Calling clear_nvmf_subsystem 00:06:04.776 Calling clear_nbd_subsystem 00:06:04.776 Calling clear_ublk_subsystem 00:06:04.776 Calling clear_vhost_blk_subsystem 00:06:04.776 Calling clear_vhost_scsi_subsystem 00:06:04.776 Calling clear_bdev_subsystem 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@352 -- # break 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:04.776 07:15:32 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:04.776 07:15:32 json_config -- json_config/common.sh@31 -- # local app=target 00:06:04.776 07:15:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:04.776 07:15:32 json_config -- json_config/common.sh@35 -- # [[ -n 547758 ]] 00:06:04.776 07:15:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 547758 00:06:04.776 07:15:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:04.776 07:15:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.776 07:15:32 json_config -- json_config/common.sh@41 -- # kill -0 547758 00:06:04.776 07:15:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:05.345 07:15:33 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:05.345 07:15:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.345 07:15:33 json_config -- json_config/common.sh@41 -- # kill -0 547758 00:06:05.345 07:15:33 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:05.345 07:15:33 json_config -- json_config/common.sh@43 -- # break 00:06:05.345 07:15:33 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:05.345 07:15:33 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:05.345 SPDK target shutdown done 00:06:05.345 07:15:33 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:05.345 INFO: relaunching applications... 
00:06:05.345 07:15:33 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.345 07:15:33 json_config -- json_config/common.sh@9 -- # local app=target 00:06:05.345 07:15:33 json_config -- json_config/common.sh@10 -- # shift 00:06:05.345 07:15:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.345 07:15:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.345 07:15:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.345 07:15:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.345 07:15:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.345 07:15:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=549430 00:06:05.345 07:15:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.345 Waiting for target to run... 00:06:05.345 07:15:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.345 07:15:33 json_config -- json_config/common.sh@25 -- # waitforlisten 549430 /var/tmp/spdk_tgt.sock 00:06:05.345 07:15:33 json_config -- common/autotest_common.sh@835 -- # '[' -z 549430 ']' 00:06:05.345 07:15:33 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.345 07:15:33 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.345 07:15:33 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:05.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.345 07:15:33 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.345 07:15:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.345 [2024-11-26 07:15:33.376121] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:05.345 [2024-11-26 07:15:33.376180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid549430 ] 00:06:05.914 [2024-11-26 07:15:33.818016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.914 [2024-11-26 07:15:33.873541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.202 [2024-11-26 07:15:36.908729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.202 [2024-11-26 07:15:36.941089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.770 07:15:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.770 07:15:37 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:09.770 07:15:37 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.770 00:06:09.770 07:15:37 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:09.770 07:15:37 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:09.770 INFO: Checking if target configuration is the same... 
00:06:09.770 07:15:37 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.770 07:15:37 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:09.770 07:15:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.770 + '[' 2 -ne 2 ']' 00:06:09.770 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:09.770 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:09.770 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:09.770 +++ basename /dev/fd/62 00:06:09.770 ++ mktemp /tmp/62.XXX 00:06:09.770 + tmp_file_1=/tmp/62.Id2 00:06:09.770 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.770 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:09.770 + tmp_file_2=/tmp/spdk_tgt_config.json.4oX 00:06:09.770 + ret=0 00:06:09.770 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.030 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.030 + diff -u /tmp/62.Id2 /tmp/spdk_tgt_config.json.4oX 00:06:10.030 + echo 'INFO: JSON config files are the same' 00:06:10.030 INFO: JSON config files are the same 00:06:10.030 + rm /tmp/62.Id2 /tmp/spdk_tgt_config.json.4oX 00:06:10.030 + exit 0 00:06:10.030 07:15:37 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:10.030 07:15:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:10.030 INFO: changing configuration and checking if this can be detected... 00:06:10.030 07:15:37 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.030 07:15:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.289 07:15:38 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.289 07:15:38 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:10.289 07:15:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.289 + '[' 2 -ne 2 ']' 00:06:10.289 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:10.289 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:10.289 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:10.289 +++ basename /dev/fd/62 00:06:10.289 ++ mktemp /tmp/62.XXX 00:06:10.289 + tmp_file_1=/tmp/62.QES 00:06:10.289 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.289 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.289 + tmp_file_2=/tmp/spdk_tgt_config.json.oPe 00:06:10.289 + ret=0 00:06:10.289 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.548 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.548 + diff -u /tmp/62.QES /tmp/spdk_tgt_config.json.oPe 00:06:10.548 + ret=1 00:06:10.548 + echo '=== Start of file: /tmp/62.QES ===' 00:06:10.548 + cat /tmp/62.QES 00:06:10.548 + echo '=== End of file: /tmp/62.QES ===' 00:06:10.548 + echo '' 00:06:10.548 + echo '=== Start of file: /tmp/spdk_tgt_config.json.oPe ===' 00:06:10.548 + cat /tmp/spdk_tgt_config.json.oPe 00:06:10.548 + echo '=== End of file: /tmp/spdk_tgt_config.json.oPe ===' 00:06:10.548 + echo '' 00:06:10.548 + rm /tmp/62.QES /tmp/spdk_tgt_config.json.oPe 00:06:10.548 + exit 1 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:10.548 INFO: configuration change detected. 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:10.548 07:15:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.548 07:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@324 -- # [[ -n 549430 ]] 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:10.548 07:15:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.548 07:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:10.548 07:15:38 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:10.549 07:15:38 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:10.549 07:15:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.549 07:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.549 07:15:38 json_config -- json_config/json_config.sh@330 -- # killprocess 549430 00:06:10.549 07:15:38 json_config -- common/autotest_common.sh@954 -- # '[' -z 549430 ']' 00:06:10.549 07:15:38 json_config -- common/autotest_common.sh@958 -- # kill -0 549430 00:06:10.549 07:15:38 json_config -- common/autotest_common.sh@959 -- # uname 00:06:10.549 07:15:38 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.549 07:15:38 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 549430 00:06:10.808 07:15:38 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.808 07:15:38 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.808 07:15:38 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 549430' 00:06:10.808 killing process with pid 549430 00:06:10.808 07:15:38 json_config -- common/autotest_common.sh@973 -- # kill 549430 00:06:10.808 07:15:38 json_config -- common/autotest_common.sh@978 -- # wait 549430 00:06:12.186 07:15:40 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.186 07:15:40 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:12.186 07:15:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.186 07:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.186 07:15:40 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:12.186 07:15:40 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:12.186 INFO: Success 00:06:12.186 00:06:12.186 real 0m15.733s 00:06:12.186 user 0m16.057s 00:06:12.186 sys 0m2.683s 00:06:12.186 07:15:40 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.186 07:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.186 ************************************ 00:06:12.186 END TEST json_config 00:06:12.186 ************************************ 00:06:12.186 07:15:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:12.186 07:15:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.186 07:15:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.186 07:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.186 ************************************ 00:06:12.186 START TEST json_config_extra_key 00:06:12.186 ************************************ 00:06:12.186 07:15:40 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.447 07:15:40 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.447 --rc genhtml_branch_coverage=1 00:06:12.447 --rc genhtml_function_coverage=1 00:06:12.447 --rc genhtml_legend=1 00:06:12.447 --rc geninfo_all_blocks=1 00:06:12.447 --rc geninfo_unexecuted_blocks=1 00:06:12.447 00:06:12.447 ' 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.447 --rc genhtml_branch_coverage=1 00:06:12.447 --rc genhtml_function_coverage=1 00:06:12.447 --rc genhtml_legend=1 00:06:12.447 --rc geninfo_all_blocks=1 00:06:12.447 --rc geninfo_unexecuted_blocks=1 00:06:12.447 00:06:12.447 ' 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.447 --rc genhtml_branch_coverage=1 00:06:12.447 --rc genhtml_function_coverage=1 00:06:12.447 --rc genhtml_legend=1 00:06:12.447 --rc geninfo_all_blocks=1 00:06:12.447 --rc geninfo_unexecuted_blocks=1 00:06:12.447 00:06:12.447 ' 00:06:12.447 07:15:40 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.447 --rc genhtml_branch_coverage=1 00:06:12.447 --rc genhtml_function_coverage=1 00:06:12.447 --rc genhtml_legend=1 00:06:12.447 --rc geninfo_all_blocks=1 00:06:12.447 --rc geninfo_unexecuted_blocks=1 00:06:12.447 00:06:12.447 ' 00:06:12.447 07:15:40 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.447 07:15:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.447 07:15:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.447 07:15:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.447 07:15:40 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.447 07:15:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:12.447 07:15:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.447 07:15:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.448 07:15:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.448 07:15:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.448 07:15:40 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.448 07:15:40 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:12.448 INFO: launching applications... 
00:06:12.448 07:15:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=550711 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:12.448 Waiting for target to run... 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 550711 /var/tmp/spdk_tgt.sock 00:06:12.448 07:15:40 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 550711 ']' 00:06:12.448 07:15:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:12.448 07:15:40 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.448 07:15:40 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.448 07:15:40 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:12.448 07:15:40 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.448 07:15:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.448 [2024-11-26 07:15:40.506468] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:12.448 [2024-11-26 07:15:40.506512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid550711 ] 00:06:13.016 [2024-11-26 07:15:40.950013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.016 [2024-11-26 07:15:41.002806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.276 07:15:41 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.276 07:15:41 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:13.276 00:06:13.276 07:15:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:13.276 INFO: shutting down applications... 
00:06:13.276 07:15:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 550711 ]] 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 550711 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 550711 00:06:13.276 07:15:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.845 07:15:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.845 07:15:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.845 07:15:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 550711 00:06:13.845 07:15:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.845 07:15:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:13.845 07:15:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.845 07:15:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.845 SPDK target shutdown done 00:06:13.845 07:15:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:13.845 Success 00:06:13.845 00:06:13.845 real 0m1.573s 00:06:13.845 user 0m1.192s 00:06:13.845 sys 0m0.564s 00:06:13.845 07:15:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.845 07:15:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.845 ************************************ 00:06:13.845 END TEST json_config_extra_key 00:06:13.845 ************************************ 00:06:13.845 07:15:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.845 07:15:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.845 07:15:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.845 07:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.845 ************************************ 00:06:13.845 START TEST alias_rpc 00:06:13.845 ************************************ 00:06:13.845 07:15:41 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:14.104 * Looking for test storage... 
00:06:14.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:14.104 07:15:42 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.104 07:15:42 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.104 07:15:42 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.104 07:15:42 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.104 07:15:42 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.105 07:15:42 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.105 --rc genhtml_branch_coverage=1 00:06:14.105 --rc genhtml_function_coverage=1 00:06:14.105 --rc genhtml_legend=1 00:06:14.105 --rc geninfo_all_blocks=1 00:06:14.105 --rc geninfo_unexecuted_blocks=1 00:06:14.105 00:06:14.105 ' 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.105 --rc genhtml_branch_coverage=1 00:06:14.105 --rc genhtml_function_coverage=1 00:06:14.105 --rc genhtml_legend=1 00:06:14.105 --rc geninfo_all_blocks=1 00:06:14.105 --rc geninfo_unexecuted_blocks=1 00:06:14.105 00:06:14.105 ' 00:06:14.105 07:15:42 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.105 --rc genhtml_branch_coverage=1 00:06:14.105 --rc genhtml_function_coverage=1 00:06:14.105 --rc genhtml_legend=1 00:06:14.105 --rc geninfo_all_blocks=1 00:06:14.105 --rc geninfo_unexecuted_blocks=1 00:06:14.105 00:06:14.105 ' 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.105 --rc genhtml_branch_coverage=1 00:06:14.105 --rc genhtml_function_coverage=1 00:06:14.105 --rc genhtml_legend=1 00:06:14.105 --rc geninfo_all_blocks=1 00:06:14.105 --rc geninfo_unexecuted_blocks=1 00:06:14.105 00:06:14.105 ' 00:06:14.105 07:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:14.105 07:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=551002 00:06:14.105 07:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 551002 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 551002 ']' 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.105 07:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.105 07:15:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.105 [2024-11-26 07:15:42.139556] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:14.105 [2024-11-26 07:15:42.139606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551002 ] 00:06:14.364 [2024-11-26 07:15:42.202339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.364 [2024-11-26 07:15:42.245052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.623 07:15:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.623 07:15:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.623 07:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:14.623 07:15:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 551002 00:06:14.623 07:15:42 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 551002 ']' 00:06:14.623 07:15:42 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 551002 00:06:14.623 07:15:42 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:14.623 07:15:42 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.623 07:15:42 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551002 00:06:14.882 07:15:42 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.882 07:15:42 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.882 07:15:42 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551002' 00:06:14.882 killing process with pid 551002 00:06:14.882 07:15:42 alias_rpc -- common/autotest_common.sh@973 -- # kill 551002 00:06:14.882 07:15:42 alias_rpc -- common/autotest_common.sh@978 -- # wait 551002 00:06:15.141 00:06:15.141 real 0m1.101s 00:06:15.141 user 0m1.137s 00:06:15.141 sys 0m0.374s 00:06:15.141 07:15:43 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.141 07:15:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.141 ************************************ 00:06:15.141 END TEST alias_rpc 00:06:15.141 ************************************ 00:06:15.141 07:15:43 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:15.141 07:15:43 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:15.141 07:15:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.141 07:15:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.141 07:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:15.141 ************************************ 00:06:15.141 START TEST spdkcli_tcp 00:06:15.141 ************************************ 00:06:15.141 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:15.141 * Looking for test storage... 
00:06:15.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:15.141 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.141 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.141 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.141 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.141 07:15:43 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.141 07:15:43 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.141 07:15:43 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.141 07:15:43 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.142 07:15:43 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.402 07:15:43 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.402 --rc genhtml_branch_coverage=1 00:06:15.402 --rc genhtml_function_coverage=1 00:06:15.402 --rc genhtml_legend=1 00:06:15.402 --rc geninfo_all_blocks=1 00:06:15.402 --rc geninfo_unexecuted_blocks=1 00:06:15.402 00:06:15.402 ' 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.402 --rc genhtml_branch_coverage=1 00:06:15.402 --rc genhtml_function_coverage=1 00:06:15.402 --rc genhtml_legend=1 00:06:15.402 --rc geninfo_all_blocks=1 00:06:15.402 --rc 
geninfo_unexecuted_blocks=1 00:06:15.402 00:06:15.402 ' 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.402 --rc genhtml_branch_coverage=1 00:06:15.402 --rc genhtml_function_coverage=1 00:06:15.402 --rc genhtml_legend=1 00:06:15.402 --rc geninfo_all_blocks=1 00:06:15.402 --rc geninfo_unexecuted_blocks=1 00:06:15.402 00:06:15.402 ' 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.402 --rc genhtml_branch_coverage=1 00:06:15.402 --rc genhtml_function_coverage=1 00:06:15.402 --rc genhtml_legend=1 00:06:15.402 --rc geninfo_all_blocks=1 00:06:15.402 --rc geninfo_unexecuted_blocks=1 00:06:15.402 00:06:15.402 ' 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=551290 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 551290 00:06:15.402 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 551290 ']' 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.402 07:15:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.402 [2024-11-26 07:15:43.309393] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:15.402 [2024-11-26 07:15:43.309438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551290 ] 00:06:15.402 [2024-11-26 07:15:43.370870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.402 [2024-11-26 07:15:43.412677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.402 [2024-11-26 07:15:43.412680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.662 07:15:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.662 07:15:43 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:15.662 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=551306 00:06:15.662 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.662 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.921 [ 00:06:15.922 "bdev_malloc_delete", 00:06:15.922 "bdev_malloc_create", 00:06:15.922 "bdev_null_resize", 00:06:15.922 "bdev_null_delete", 00:06:15.922 "bdev_null_create", 00:06:15.922 "bdev_nvme_cuse_unregister", 00:06:15.922 "bdev_nvme_cuse_register", 00:06:15.922 "bdev_opal_new_user", 00:06:15.922 "bdev_opal_set_lock_state", 00:06:15.922 "bdev_opal_delete", 00:06:15.922 "bdev_opal_get_info", 00:06:15.922 "bdev_opal_create", 00:06:15.922 "bdev_nvme_opal_revert", 00:06:15.922 "bdev_nvme_opal_init", 00:06:15.922 "bdev_nvme_send_cmd", 00:06:15.922 "bdev_nvme_set_keys", 00:06:15.922 "bdev_nvme_get_path_iostat", 00:06:15.922 "bdev_nvme_get_mdns_discovery_info", 00:06:15.922 "bdev_nvme_stop_mdns_discovery", 00:06:15.922 "bdev_nvme_start_mdns_discovery", 00:06:15.922 "bdev_nvme_set_multipath_policy", 00:06:15.922 "bdev_nvme_set_preferred_path", 00:06:15.922 "bdev_nvme_get_io_paths", 00:06:15.922 "bdev_nvme_remove_error_injection", 00:06:15.922 "bdev_nvme_add_error_injection", 00:06:15.922 "bdev_nvme_get_discovery_info", 00:06:15.922 "bdev_nvme_stop_discovery", 00:06:15.922 "bdev_nvme_start_discovery", 00:06:15.922 "bdev_nvme_get_controller_health_info", 00:06:15.922 "bdev_nvme_disable_controller", 00:06:15.922 "bdev_nvme_enable_controller", 00:06:15.922 "bdev_nvme_reset_controller", 00:06:15.922 "bdev_nvme_get_transport_statistics", 00:06:15.922 "bdev_nvme_apply_firmware", 00:06:15.922 "bdev_nvme_detach_controller", 00:06:15.922 "bdev_nvme_get_controllers", 00:06:15.922 "bdev_nvme_attach_controller", 00:06:15.922 "bdev_nvme_set_hotplug", 00:06:15.922 "bdev_nvme_set_options", 00:06:15.922 "bdev_passthru_delete", 00:06:15.922 "bdev_passthru_create", 00:06:15.922 "bdev_lvol_set_parent_bdev", 00:06:15.922 "bdev_lvol_set_parent", 00:06:15.922 "bdev_lvol_check_shallow_copy", 00:06:15.922 "bdev_lvol_start_shallow_copy", 00:06:15.922 "bdev_lvol_grow_lvstore", 00:06:15.922 "bdev_lvol_get_lvols", 00:06:15.922 "bdev_lvol_get_lvstores", 00:06:15.922 "bdev_lvol_delete", 00:06:15.922 "bdev_lvol_set_read_only", 00:06:15.922 "bdev_lvol_resize", 00:06:15.922 "bdev_lvol_decouple_parent", 00:06:15.922 "bdev_lvol_inflate", 00:06:15.922 "bdev_lvol_rename", 00:06:15.922 "bdev_lvol_clone_bdev", 00:06:15.922 "bdev_lvol_clone", 00:06:15.922 "bdev_lvol_snapshot", 00:06:15.922 "bdev_lvol_create", 00:06:15.922 "bdev_lvol_delete_lvstore", 00:06:15.922 "bdev_lvol_rename_lvstore", 
00:06:15.922 "bdev_lvol_create_lvstore", 00:06:15.922 "bdev_raid_set_options", 00:06:15.922 "bdev_raid_remove_base_bdev", 00:06:15.922 "bdev_raid_add_base_bdev", 00:06:15.922 "bdev_raid_delete", 00:06:15.922 "bdev_raid_create", 00:06:15.922 "bdev_raid_get_bdevs", 00:06:15.922 "bdev_error_inject_error", 00:06:15.922 "bdev_error_delete", 00:06:15.922 "bdev_error_create", 00:06:15.922 "bdev_split_delete", 00:06:15.922 "bdev_split_create", 00:06:15.922 "bdev_delay_delete", 00:06:15.922 "bdev_delay_create", 00:06:15.922 "bdev_delay_update_latency", 00:06:15.922 "bdev_zone_block_delete", 00:06:15.922 "bdev_zone_block_create", 00:06:15.922 "blobfs_create", 00:06:15.922 "blobfs_detect", 00:06:15.922 "blobfs_set_cache_size", 00:06:15.922 "bdev_aio_delete", 00:06:15.922 "bdev_aio_rescan", 00:06:15.922 "bdev_aio_create", 00:06:15.922 "bdev_ftl_set_property", 00:06:15.922 "bdev_ftl_get_properties", 00:06:15.922 "bdev_ftl_get_stats", 00:06:15.922 "bdev_ftl_unmap", 00:06:15.922 "bdev_ftl_unload", 00:06:15.922 "bdev_ftl_delete", 00:06:15.922 "bdev_ftl_load", 00:06:15.922 "bdev_ftl_create", 00:06:15.922 "bdev_virtio_attach_controller", 00:06:15.922 "bdev_virtio_scsi_get_devices", 00:06:15.922 "bdev_virtio_detach_controller", 00:06:15.922 "bdev_virtio_blk_set_hotplug", 00:06:15.922 "bdev_iscsi_delete", 00:06:15.922 "bdev_iscsi_create", 00:06:15.922 "bdev_iscsi_set_options", 00:06:15.922 "accel_error_inject_error", 00:06:15.922 "ioat_scan_accel_module", 00:06:15.922 "dsa_scan_accel_module", 00:06:15.922 "iaa_scan_accel_module", 00:06:15.922 "vfu_virtio_create_fs_endpoint", 00:06:15.922 "vfu_virtio_create_scsi_endpoint", 00:06:15.922 "vfu_virtio_scsi_remove_target", 00:06:15.922 "vfu_virtio_scsi_add_target", 00:06:15.922 "vfu_virtio_create_blk_endpoint", 00:06:15.922 "vfu_virtio_delete_endpoint", 00:06:15.922 "keyring_file_remove_key", 00:06:15.922 "keyring_file_add_key", 00:06:15.922 "keyring_linux_set_options", 00:06:15.922 "fsdev_aio_delete", 00:06:15.922 "fsdev_aio_create", 00:06:15.922 "iscsi_get_histogram", 00:06:15.922 "iscsi_enable_histogram", 00:06:15.922 "iscsi_set_options", 00:06:15.922 "iscsi_get_auth_groups", 00:06:15.922 "iscsi_auth_group_remove_secret", 00:06:15.922 "iscsi_auth_group_add_secret", 00:06:15.922 "iscsi_delete_auth_group", 00:06:15.922 "iscsi_create_auth_group", 00:06:15.922 "iscsi_set_discovery_auth", 00:06:15.922 "iscsi_get_options", 00:06:15.922 "iscsi_target_node_request_logout", 00:06:15.922 "iscsi_target_node_set_redirect", 00:06:15.922 "iscsi_target_node_set_auth", 00:06:15.922 "iscsi_target_node_add_lun", 00:06:15.922 "iscsi_get_stats", 00:06:15.922 "iscsi_get_connections", 00:06:15.922 "iscsi_portal_group_set_auth", 00:06:15.922 "iscsi_start_portal_group", 00:06:15.922 "iscsi_delete_portal_group", 00:06:15.922 "iscsi_create_portal_group", 00:06:15.922 "iscsi_get_portal_groups", 00:06:15.922 "iscsi_delete_target_node", 00:06:15.922 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.922 "iscsi_target_node_add_pg_ig_maps", 00:06:15.922 "iscsi_create_target_node", 00:06:15.922 "iscsi_get_target_nodes", 00:06:15.922 "iscsi_delete_initiator_group", 00:06:15.922 "iscsi_initiator_group_remove_initiators", 00:06:15.922 "iscsi_initiator_group_add_initiators", 00:06:15.922 "iscsi_create_initiator_group", 00:06:15.922 "iscsi_get_initiator_groups", 00:06:15.922 "nvmf_set_crdt", 00:06:15.922 "nvmf_set_config", 00:06:15.922 "nvmf_set_max_subsystems", 00:06:15.922 "nvmf_stop_mdns_prr", 00:06:15.922 "nvmf_publish_mdns_prr", 00:06:15.922 "nvmf_subsystem_get_listeners", 00:06:15.922 
"nvmf_subsystem_get_qpairs", 00:06:15.922 "nvmf_subsystem_get_controllers", 00:06:15.922 "nvmf_get_stats", 00:06:15.922 "nvmf_get_transports", 00:06:15.922 "nvmf_create_transport", 00:06:15.922 "nvmf_get_targets", 00:06:15.922 "nvmf_delete_target", 00:06:15.922 "nvmf_create_target", 00:06:15.922 "nvmf_subsystem_allow_any_host", 00:06:15.922 "nvmf_subsystem_set_keys", 00:06:15.922 "nvmf_subsystem_remove_host", 00:06:15.922 "nvmf_subsystem_add_host", 00:06:15.922 "nvmf_ns_remove_host", 00:06:15.922 "nvmf_ns_add_host", 00:06:15.922 "nvmf_subsystem_remove_ns", 00:06:15.922 "nvmf_subsystem_set_ns_ana_group", 00:06:15.922 "nvmf_subsystem_add_ns", 00:06:15.922 "nvmf_subsystem_listener_set_ana_state", 00:06:15.922 "nvmf_discovery_get_referrals", 00:06:15.922 "nvmf_discovery_remove_referral", 00:06:15.922 "nvmf_discovery_add_referral", 00:06:15.922 "nvmf_subsystem_remove_listener", 00:06:15.922 "nvmf_subsystem_add_listener", 00:06:15.922 "nvmf_delete_subsystem", 00:06:15.922 "nvmf_create_subsystem", 00:06:15.922 "nvmf_get_subsystems", 00:06:15.922 "env_dpdk_get_mem_stats", 00:06:15.922 "nbd_get_disks", 00:06:15.922 "nbd_stop_disk", 00:06:15.922 "nbd_start_disk", 00:06:15.922 "ublk_recover_disk", 00:06:15.922 "ublk_get_disks", 00:06:15.922 "ublk_stop_disk", 00:06:15.922 "ublk_start_disk", 00:06:15.922 "ublk_destroy_target", 00:06:15.922 "ublk_create_target", 00:06:15.922 "virtio_blk_create_transport", 00:06:15.922 "virtio_blk_get_transports", 00:06:15.922 "vhost_controller_set_coalescing", 00:06:15.922 "vhost_get_controllers", 00:06:15.922 "vhost_delete_controller", 00:06:15.922 "vhost_create_blk_controller", 00:06:15.922 "vhost_scsi_controller_remove_target", 00:06:15.922 "vhost_scsi_controller_add_target", 00:06:15.922 "vhost_start_scsi_controller", 00:06:15.922 "vhost_create_scsi_controller", 00:06:15.922 "thread_set_cpumask", 00:06:15.922 "scheduler_set_options", 00:06:15.922 "framework_get_governor", 00:06:15.922 "framework_get_scheduler", 00:06:15.922 "framework_set_scheduler", 00:06:15.922 "framework_get_reactors", 00:06:15.922 "thread_get_io_channels", 00:06:15.922 "thread_get_pollers", 00:06:15.922 "thread_get_stats", 00:06:15.922 "framework_monitor_context_switch", 00:06:15.922 "spdk_kill_instance", 00:06:15.922 "log_enable_timestamps", 00:06:15.922 "log_get_flags", 00:06:15.922 "log_clear_flag", 00:06:15.922 "log_set_flag", 00:06:15.922 "log_get_level", 00:06:15.922 "log_set_level", 00:06:15.922 "log_get_print_level", 00:06:15.922 "log_set_print_level", 00:06:15.922 "framework_enable_cpumask_locks", 00:06:15.922 "framework_disable_cpumask_locks", 00:06:15.922 "framework_wait_init", 00:06:15.922 "framework_start_init", 00:06:15.922 "scsi_get_devices", 00:06:15.922 "bdev_get_histogram", 00:06:15.922 "bdev_enable_histogram", 00:06:15.922 "bdev_set_qos_limit", 00:06:15.922 "bdev_set_qd_sampling_period", 00:06:15.922 "bdev_get_bdevs", 00:06:15.922 "bdev_reset_iostat", 00:06:15.922 "bdev_get_iostat", 00:06:15.922 "bdev_examine", 00:06:15.922 "bdev_wait_for_examine", 00:06:15.923 "bdev_set_options", 00:06:15.923 "accel_get_stats", 00:06:15.923 "accel_set_options", 00:06:15.923 "accel_set_driver", 00:06:15.923 "accel_crypto_key_destroy", 00:06:15.923 "accel_crypto_keys_get", 00:06:15.923 "accel_crypto_key_create", 00:06:15.923 "accel_assign_opc", 00:06:15.923 "accel_get_module_info", 00:06:15.923 "accel_get_opc_assignments", 00:06:15.923 "vmd_rescan", 00:06:15.923 "vmd_remove_device", 00:06:15.923 "vmd_enable", 00:06:15.923 "sock_get_default_impl", 00:06:15.923 "sock_set_default_impl", 
00:06:15.923 "sock_impl_set_options", 00:06:15.923 "sock_impl_get_options", 00:06:15.923 "iobuf_get_stats", 00:06:15.923 "iobuf_set_options", 00:06:15.923 "keyring_get_keys", 00:06:15.923 "vfu_tgt_set_base_path", 00:06:15.923 "framework_get_pci_devices", 00:06:15.923 "framework_get_config", 00:06:15.923 "framework_get_subsystems", 00:06:15.923 "fsdev_set_opts", 00:06:15.923 "fsdev_get_opts", 00:06:15.923 "trace_get_info", 00:06:15.923 "trace_get_tpoint_group_mask", 00:06:15.923 "trace_disable_tpoint_group", 00:06:15.923 "trace_enable_tpoint_group", 00:06:15.923 "trace_clear_tpoint_mask", 00:06:15.923 "trace_set_tpoint_mask", 00:06:15.923 "notify_get_notifications", 00:06:15.923 "notify_get_types", 00:06:15.923 "spdk_get_version", 00:06:15.923 "rpc_get_methods" 00:06:15.923 ] 00:06:15.923 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.923 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.923 07:15:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 551290 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 551290 ']' 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 551290 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551290 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551290' 00:06:15.923 killing process with pid 551290 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 551290 00:06:15.923 07:15:43 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 551290 00:06:16.183 00:06:16.183 real 0m1.123s 00:06:16.183 user 0m1.902s 00:06:16.183 sys 0m0.438s 00:06:16.183 07:15:44 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.183 07:15:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.183 ************************************ 00:06:16.183 END TEST spdkcli_tcp 00:06:16.183 ************************************ 00:06:16.183 07:15:44 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.183 07:15:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.183 07:15:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.183 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:16.442 ************************************ 00:06:16.442 START TEST dpdk_mem_utility 00:06:16.442 ************************************ 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.442 * Looking for test storage... 
00:06:16.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.442 07:15:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.442 --rc genhtml_branch_coverage=1 00:06:16.442 --rc genhtml_function_coverage=1 00:06:16.442 --rc genhtml_legend=1 00:06:16.442 --rc geninfo_all_blocks=1 00:06:16.442 --rc geninfo_unexecuted_blocks=1 00:06:16.442 00:06:16.442 ' 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.442 --rc 
genhtml_branch_coverage=1 00:06:16.442 --rc genhtml_function_coverage=1 00:06:16.442 --rc genhtml_legend=1 00:06:16.442 --rc geninfo_all_blocks=1 00:06:16.442 --rc geninfo_unexecuted_blocks=1 00:06:16.442 00:06:16.442 ' 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.442 --rc genhtml_branch_coverage=1 00:06:16.442 --rc genhtml_function_coverage=1 00:06:16.442 --rc genhtml_legend=1 00:06:16.442 --rc geninfo_all_blocks=1 00:06:16.442 --rc geninfo_unexecuted_blocks=1 00:06:16.442 00:06:16.442 ' 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.442 --rc genhtml_branch_coverage=1 00:06:16.442 --rc genhtml_function_coverage=1 00:06:16.442 --rc genhtml_legend=1 00:06:16.442 --rc geninfo_all_blocks=1 00:06:16.442 --rc geninfo_unexecuted_blocks=1 00:06:16.442 00:06:16.442 ' 00:06:16.442 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:16.442 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=551594 00:06:16.442 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 551594 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 551594 ']' 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.442 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.442 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.442 [2024-11-26 07:15:44.500360] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
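The dpdk_mem_utility test starting here pairs the env_dpdk_get_mem_stats RPC with the dpdk_mem_info.py helper assigned to MEM_SCRIPT above: the RPC makes the freshly launched spdk_tgt write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and the script then summarizes heaps, mempools and memzones from that dump, which is what the output that follows shows. A minimal sketch of the same steps once the target is up on /var/tmp/spdk.sock (paths relative to the spdk checkout):

    ./scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                   # overall heap/mempool/memzone summary
    ./scripts/dpdk_mem_info.py -m 0              # per-element view of heap id 0, as run below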
00:06:16.442 [2024-11-26 07:15:44.500408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551594 ] 00:06:16.703 [2024-11-26 07:15:44.562490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.703 [2024-11-26 07:15:44.605294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.965 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.965 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:16.965 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.965 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.965 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.965 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.965 { 00:06:16.965 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.965 } 00:06:16.965 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.965 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:16.965 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:16.965 1 heaps totaling size 810.000000 MiB 00:06:16.965 size: 810.000000 MiB heap id: 0 00:06:16.965 end heaps---------- 00:06:16.965 9 mempools totaling size 595.772034 MiB 00:06:16.965 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.965 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.965 size: 92.545471 MiB name: bdev_io_551594 00:06:16.965 size: 50.003479 MiB name: msgpool_551594 00:06:16.965 size: 36.509338 MiB name: fsdev_io_551594 00:06:16.965 size: 21.763794 MiB name: PDU_Pool 00:06:16.965 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:16.965 size: 4.133484 MiB name: evtpool_551594 00:06:16.965 size: 0.026123 MiB name: Session_Pool 00:06:16.965 end mempools------- 00:06:16.965 6 memzones totaling size 4.142822 MiB 00:06:16.965 size: 1.000366 MiB name: RG_ring_0_551594 00:06:16.965 size: 1.000366 MiB name: RG_ring_1_551594 00:06:16.965 size: 1.000366 MiB name: RG_ring_4_551594 00:06:16.965 size: 1.000366 MiB name: RG_ring_5_551594 00:06:16.965 size: 0.125366 MiB name: RG_ring_2_551594 00:06:16.965 size: 0.015991 MiB name: RG_ring_3_551594 00:06:16.965 end memzones------- 00:06:16.965 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.965 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:16.965 list of free elements. 
size: 10.862488 MiB 00:06:16.965 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:16.965 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:16.965 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:16.965 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:16.965 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:16.965 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:16.965 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:16.965 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:16.965 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:16.965 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:16.965 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:16.965 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:16.965 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:16.965 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:16.965 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:16.965 list of standard malloc elements. size: 199.218628 MiB 00:06:16.965 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:16.965 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:16.965 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:16.965 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:16.965 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:16.965 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.965 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:16.965 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.965 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:16.965 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:16.965 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:16.965 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:16.965 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:16.965 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:16.965 list of memzone associated elements. size: 599.918884 MiB 00:06:16.965 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:16.965 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.965 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:16.965 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.965 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:16.966 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_551594_0 00:06:16.966 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:16.966 associated memzone info: size: 48.002930 MiB name: MP_msgpool_551594_0 00:06:16.966 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:16.966 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_551594_0 00:06:16.966 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:16.966 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.966 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:16.966 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.966 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:16.966 associated memzone info: size: 3.000122 MiB name: MP_evtpool_551594_0 00:06:16.966 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:16.966 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_551594 00:06:16.966 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.966 associated memzone info: size: 1.007996 MiB name: MP_evtpool_551594 00:06:16.966 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:16.966 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.966 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:16.966 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.966 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:16.966 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.966 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:16.966 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.966 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:16.966 associated memzone info: size: 1.000366 MiB name: RG_ring_0_551594 00:06:16.966 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:16.966 associated memzone info: size: 1.000366 MiB name: RG_ring_1_551594 00:06:16.966 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:16.966 associated memzone info: size: 1.000366 MiB name: RG_ring_4_551594 00:06:16.966 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:16.966 associated memzone info: size: 1.000366 MiB name: RG_ring_5_551594 00:06:16.966 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:16.966 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_551594 00:06:16.966 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:16.966 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_551594 00:06:16.966 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:16.966 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.966 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:16.966 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.966 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:16.966 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.966 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:16.966 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_551594 00:06:16.966 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:16.966 associated memzone info: size: 0.125366 MiB name: RG_ring_2_551594 00:06:16.966 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:16.966 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.966 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:16.966 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.966 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:16.966 associated memzone info: size: 0.015991 MiB name: RG_ring_3_551594 00:06:16.966 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:16.966 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.966 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:16.966 associated memzone info: size: 0.000183 MiB name: MP_msgpool_551594 00:06:16.966 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:16.966 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_551594 00:06:16.966 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:16.966 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_551594 00:06:16.966 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:16.966 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.966 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.966 07:15:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 551594 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 551594 ']' 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 551594 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551594 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551594' 00:06:16.966 killing process with pid 551594 00:06:16.966 07:15:44 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 551594 00:06:16.966 07:15:44 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 551594 00:06:17.229 00:06:17.229 real 0m0.973s 00:06:17.229 user 0m0.925s 00:06:17.229 sys 0m0.375s 00:06:17.229 07:15:45 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.229 07:15:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.229 ************************************ 00:06:17.229 END TEST dpdk_mem_utility 00:06:17.229 ************************************ 00:06:17.229 07:15:45 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:17.229 07:15:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.229 07:15:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.229 07:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:17.229 ************************************ 00:06:17.229 START TEST event 00:06:17.229 ************************************ 00:06:17.229 07:15:45 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:17.488 * Looking for test storage... 00:06:17.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.488 07:15:45 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.488 07:15:45 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.488 07:15:45 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.488 07:15:45 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.488 07:15:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.488 07:15:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.488 07:15:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.488 07:15:45 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.488 07:15:45 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.488 07:15:45 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.488 07:15:45 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.488 07:15:45 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.488 07:15:45 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.489 07:15:45 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.489 07:15:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.489 07:15:45 event -- scripts/common.sh@344 -- # case "$op" in 00:06:17.489 07:15:45 event -- scripts/common.sh@345 -- # : 1 00:06:17.489 07:15:45 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.489 07:15:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.489 07:15:45 event -- scripts/common.sh@365 -- # decimal 1 00:06:17.489 07:15:45 event -- scripts/common.sh@353 -- # local d=1 00:06:17.489 07:15:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.489 07:15:45 event -- scripts/common.sh@355 -- # echo 1 00:06:17.489 07:15:45 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.489 07:15:45 event -- scripts/common.sh@366 -- # decimal 2 00:06:17.489 07:15:45 event -- scripts/common.sh@353 -- # local d=2 00:06:17.489 07:15:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.489 07:15:45 event -- scripts/common.sh@355 -- # echo 2 00:06:17.489 07:15:45 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.489 07:15:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.489 07:15:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.489 07:15:45 event -- scripts/common.sh@368 -- # return 0 00:06:17.489 07:15:45 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.489 07:15:45 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.489 --rc genhtml_branch_coverage=1 00:06:17.489 --rc genhtml_function_coverage=1 00:06:17.489 --rc genhtml_legend=1 00:06:17.489 --rc geninfo_all_blocks=1 00:06:17.489 --rc geninfo_unexecuted_blocks=1 00:06:17.489 00:06:17.489 ' 00:06:17.489 07:15:45 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.489 --rc genhtml_branch_coverage=1 00:06:17.489 --rc genhtml_function_coverage=1 00:06:17.489 --rc genhtml_legend=1 00:06:17.489 --rc geninfo_all_blocks=1 00:06:17.489 --rc geninfo_unexecuted_blocks=1 00:06:17.489 00:06:17.489 ' 00:06:17.489 07:15:45 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.489 --rc genhtml_branch_coverage=1 00:06:17.489 --rc genhtml_function_coverage=1 00:06:17.489 --rc genhtml_legend=1 00:06:17.489 --rc geninfo_all_blocks=1 00:06:17.489 --rc geninfo_unexecuted_blocks=1 00:06:17.489 00:06:17.489 ' 00:06:17.489 07:15:45 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.489 --rc genhtml_branch_coverage=1 00:06:17.489 --rc genhtml_function_coverage=1 00:06:17.489 --rc genhtml_legend=1 00:06:17.489 --rc geninfo_all_blocks=1 00:06:17.489 --rc geninfo_unexecuted_blocks=1 00:06:17.489 00:06:17.489 ' 00:06:17.489 07:15:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:17.489 07:15:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.489 07:15:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.489 07:15:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:17.489 07:15:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.489 07:15:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.489 ************************************ 00:06:17.489 START TEST event_perf 00:06:17.489 ************************************ 00:06:17.489 07:15:45 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:17.489 Running I/O for 1 seconds...[2024-11-26 07:15:45.549547] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:17.489 [2024-11-26 07:15:45.549615] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551884 ] 00:06:17.748 [2024-11-26 07:15:45.614207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.748 [2024-11-26 07:15:45.658597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.748 [2024-11-26 07:15:45.658695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.748 [2024-11-26 07:15:45.658778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.748 [2024-11-26 07:15:45.658780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.686 Running I/O for 1 seconds... 00:06:18.686 lcore 0: 203191 00:06:18.686 lcore 1: 203193 00:06:18.686 lcore 2: 203192 00:06:18.686 lcore 3: 203192 00:06:18.686 done. 00:06:18.686 00:06:18.686 real 0m1.170s 00:06:18.686 user 0m4.101s 00:06:18.686 sys 0m0.066s 00:06:18.686 07:15:46 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.686 07:15:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.686 ************************************ 00:06:18.686 END TEST event_perf 00:06:18.686 ************************************ 00:06:18.686 07:15:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.686 07:15:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:18.686 07:15:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.686 07:15:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.686 ************************************ 00:06:18.686 START TEST event_reactor 00:06:18.686 ************************************ 00:06:18.686 07:15:46 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.945 [2024-11-26 07:15:46.790798] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
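The event_perf numbers above come from running the example for one second (-t 1) across the four cores in mask 0xF; each "lcore N:" line appears to be the count of events that core processed before "done." was printed, so roughly equal counts suggest the load was spread evenly. Outside autotest the same run is just (path relative to the spdk checkout, flags as logged above):

    ./test/event/event_perf/event_perf -m 0xF -t 1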
00:06:18.945 [2024-11-26 07:15:46.790866] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552133 ] 00:06:18.945 [2024-11-26 07:15:46.857340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.945 [2024-11-26 07:15:46.899395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.879 test_start 00:06:19.879 oneshot 00:06:19.879 tick 100 00:06:19.879 tick 100 00:06:19.879 tick 250 00:06:19.879 tick 100 00:06:19.879 tick 100 00:06:19.879 tick 100 00:06:19.879 tick 250 00:06:19.879 tick 500 00:06:19.879 tick 100 00:06:19.879 tick 100 00:06:19.879 tick 250 00:06:19.879 tick 100 00:06:19.879 tick 100 00:06:19.879 test_end 00:06:19.879 00:06:19.879 real 0m1.169s 00:06:19.879 user 0m1.101s 00:06:19.879 sys 0m0.064s 00:06:19.879 07:15:47 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.879 07:15:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:19.879 ************************************ 00:06:19.879 END TEST event_reactor 00:06:19.879 ************************************ 00:06:19.879 07:15:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.879 07:15:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:19.879 07:15:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.879 07:15:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.138 ************************************ 00:06:20.139 START TEST event_reactor_perf 00:06:20.139 ************************************ 00:06:20.139 07:15:48 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.139 [2024-11-26 07:15:48.031059] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:20.139 [2024-11-26 07:15:48.031128] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552362 ] 00:06:20.139 [2024-11-26 07:15:48.098092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.139 [2024-11-26 07:15:48.138310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.517 test_start 00:06:21.517 test_end 00:06:21.517 Performance: 505737 events per second 00:06:21.517 00:06:21.517 real 0m1.168s 00:06:21.517 user 0m1.092s 00:06:21.517 sys 0m0.073s 00:06:21.517 07:15:49 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.517 07:15:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.517 ************************************ 00:06:21.517 END TEST event_reactor_perf 00:06:21.517 ************************************ 00:06:21.517 07:15:49 event -- event/event.sh@49 -- # uname -s 00:06:21.517 07:15:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.517 07:15:49 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.517 07:15:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.517 07:15:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.517 07:15:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.517 ************************************ 00:06:21.517 START TEST event_scheduler 00:06:21.517 ************************************ 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.517 * Looking for test storage... 
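The "Performance: 505737 events per second" line above is event_reactor_perf's measurement of how many events a single reactor (core mask 0x1) could schedule and complete during the one-second run. The standalone equivalent, with the same duration flag as logged above:

    ./test/event/reactor_perf/reactor_perf -t 1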
00:06:21.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.517 07:15:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.517 --rc genhtml_branch_coverage=1 00:06:21.517 --rc genhtml_function_coverage=1 00:06:21.517 --rc genhtml_legend=1 00:06:21.517 --rc geninfo_all_blocks=1 00:06:21.517 --rc geninfo_unexecuted_blocks=1 00:06:21.517 00:06:21.517 ' 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.517 --rc genhtml_branch_coverage=1 00:06:21.517 --rc genhtml_function_coverage=1 00:06:21.517 --rc genhtml_legend=1 00:06:21.517 --rc geninfo_all_blocks=1 00:06:21.517 --rc geninfo_unexecuted_blocks=1 00:06:21.517 00:06:21.517 ' 00:06:21.517 07:15:49 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.517 --rc genhtml_branch_coverage=1 00:06:21.517 --rc genhtml_function_coverage=1 00:06:21.518 --rc genhtml_legend=1 00:06:21.518 --rc geninfo_all_blocks=1 00:06:21.518 --rc geninfo_unexecuted_blocks=1 00:06:21.518 00:06:21.518 ' 00:06:21.518 07:15:49 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.518 --rc genhtml_branch_coverage=1 00:06:21.518 --rc genhtml_function_coverage=1 00:06:21.518 --rc genhtml_legend=1 00:06:21.518 --rc geninfo_all_blocks=1 00:06:21.518 --rc geninfo_unexecuted_blocks=1 00:06:21.518 00:06:21.518 ' 00:06:21.518 07:15:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.518 07:15:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.518 07:15:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=552675 00:06:21.518 07:15:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.518 07:15:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 552675 
00:06:21.518 07:15:49 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 552675 ']' 00:06:21.518 07:15:49 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.518 07:15:49 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.518 07:15:49 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.518 07:15:49 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.518 07:15:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.518 [2024-11-26 07:15:49.465983] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:21.518 [2024-11-26 07:15:49.466032] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552675 ] 00:06:21.518 [2024-11-26 07:15:49.526515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.518 [2024-11-26 07:15:49.573142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.518 [2024-11-26 07:15:49.573227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.518 [2024-11-26 07:15:49.573244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.518 [2024-11-26 07:15:49.573263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.777 07:15:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.777 07:15:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:21.777 07:15:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.777 07:15:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.777 07:15:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.777 [2024-11-26 07:15:49.649898] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:21.777 [2024-11-26 07:15:49.649913] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:21.777 [2024-11-26 07:15:49.649923] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.778 [2024-11-26 07:15:49.649928] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.778 [2024-11-26 07:15:49.649933] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.778 07:15:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.778 07:15:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 [2024-11-26 07:15:49.723546] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
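Because the scheduler test app was launched above with --wait-for-rpc, initialization pauses until scheduler.sh selects the dynamic scheduler and then resumes it; the dpdk_governor ERROR only means that governor could not be initialized for this core mask, after which the dynamic scheduler logs the load/core/busy settings (20/80/95) seen above and the test application starts. The same two calls issued by hand against a paused target would look like this (both RPC names appear in the method list printed earlier):

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init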
00:06:21.778 07:15:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.778 07:15:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.778 07:15:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 ************************************ 00:06:21.778 START TEST scheduler_create_thread 00:06:21.778 ************************************ 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 2 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 3 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 4 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 5 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 6 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 7 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 8 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 9 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 10 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.778 07:15:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.716 07:15:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.716 07:15:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:22.716 07:15:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.716 07:15:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.096 07:15:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.096 07:15:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:24.096 07:15:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:24.096 07:15:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.096 07:15:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.474 07:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.474 00:06:25.474 real 0m3.381s 00:06:25.474 user 0m0.025s 00:06:25.474 sys 0m0.004s 00:06:25.474 07:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.474 07:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.474 ************************************ 00:06:25.474 END TEST scheduler_create_thread 00:06:25.474 ************************************ 00:06:25.474 07:15:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:25.474 07:15:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 552675 00:06:25.474 07:15:53 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 552675 ']' 00:06:25.474 07:15:53 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 552675 00:06:25.474 07:15:53 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:25.474 07:15:53 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.475 07:15:53 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552675 00:06:25.475 07:15:53 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:25.475 07:15:53 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:25.475 07:15:53 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552675' 00:06:25.475 killing process with pid 552675 00:06:25.475 07:15:53 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 552675 00:06:25.475 07:15:53 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 552675 00:06:25.475 [2024-11-26 07:15:53.519580] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
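The scheduler_create_thread subtest that just finished drives the test plugin's RPCs: scheduler_thread_create registers a lightweight thread with a name (-n), an optional pinned cpumask (-m) and an active percentage (-a); scheduler_thread_set_active changes a thread's load using the id returned by create (11 above); and scheduler_thread_delete removes one (id 12 above). Issued manually the calls would look like the following sketch, assuming the test's scheduler_plugin module is importable by rpc.py (autotest arranges that for rpc_cmd) and using the ids returned by your own create calls:

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12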
00:06:25.734 00:06:25.734 real 0m4.466s 00:06:25.734 user 0m7.924s 00:06:25.734 sys 0m0.338s 00:06:25.734 07:15:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.734 07:15:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.734 ************************************ 00:06:25.734 END TEST event_scheduler 00:06:25.734 ************************************ 00:06:25.734 07:15:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:25.734 07:15:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:25.734 07:15:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.734 07:15:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.734 07:15:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.734 ************************************ 00:06:25.734 START TEST app_repeat 00:06:25.734 ************************************ 00:06:25.734 07:15:53 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:25.734 07:15:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.734 07:15:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.734 07:15:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:25.734 07:15:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.734 07:15:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:25.734 07:15:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:25.734 07:15:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:25.734 07:15:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=553415 00:06:25.735 07:15:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.735 07:15:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:25.735 07:15:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 553415' 00:06:25.735 Process app_repeat pid: 553415 00:06:25.735 07:15:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.735 07:15:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:25.735 spdk_app_start Round 0 00:06:25.735 07:15:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 553415 /var/tmp/spdk-nbd.sock 00:06:25.735 07:15:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 553415 ']' 00:06:25.735 07:15:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.735 07:15:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.735 07:15:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.735 07:15:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.735 07:15:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.994 [2024-11-26 07:15:53.835993] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
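
app_repeat has just been launched with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 and the script is now waiting on that socket. The real waitforlisten helper lives in autotest_common.sh and is not expanded in this trace; the loop below is only a plausible stand-in for it (the rpc_get_methods probe and the retry interval are assumptions), kept here to show the shape of the launch-and-wait step.

#!/usr/bin/env bash
# Rough sketch of the launch-and-wait step around app_repeat; not the real
# waitforlisten implementation from autotest_common.sh.
set -euo pipefail

spdk_root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk-nbd.sock

# -r: private RPC socket, -m 0x3: cores 0-1, -t 4: seconds per repeat round
"$spdk_root/test/event/app_repeat/app_repeat" -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!
# the traced script also traps EXIT and clears the trap at teardown; trimmed here
trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM

echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
for _ in $(seq 1 100); do
    # any cheap RPC works as a liveness probe once the server is accepting
    if "$spdk_root/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
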
00:06:25.994 [2024-11-26 07:15:53.836046] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553415 ] 00:06:25.994 [2024-11-26 07:15:53.902118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.994 [2024-11-26 07:15:53.948965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.994 [2024-11-26 07:15:53.948969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.994 07:15:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.994 07:15:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:25.994 07:15:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.253 Malloc0 00:06:26.253 07:15:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.512 Malloc1 00:06:26.513 07:15:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.513 07:15:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.773 /dev/nbd0 00:06:26.773 07:15:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.773 07:15:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.773 1+0 records in 00:06:26.773 1+0 records out 00:06:26.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185877 s, 22.0 MB/s 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.773 07:15:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.773 07:15:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.773 07:15:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.773 07:15:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.032 /dev/nbd1 00:06:27.032 07:15:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.032 07:15:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.032 1+0 records in 00:06:27.032 1+0 records out 00:06:27.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155658 s, 26.3 MB/s 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.032 07:15:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:27.032 07:15:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.032 07:15:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.032 
07:15:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.032 07:15:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.032 07:15:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.291 { 00:06:27.291 "nbd_device": "/dev/nbd0", 00:06:27.291 "bdev_name": "Malloc0" 00:06:27.291 }, 00:06:27.291 { 00:06:27.291 "nbd_device": "/dev/nbd1", 00:06:27.291 "bdev_name": "Malloc1" 00:06:27.291 } 00:06:27.291 ]' 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.291 { 00:06:27.291 "nbd_device": "/dev/nbd0", 00:06:27.291 "bdev_name": "Malloc0" 00:06:27.291 }, 00:06:27.291 { 00:06:27.291 "nbd_device": "/dev/nbd1", 00:06:27.291 "bdev_name": "Malloc1" 00:06:27.291 } 00:06:27.291 ]' 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.291 /dev/nbd1' 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.291 /dev/nbd1' 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.291 07:15:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.292 256+0 records in 00:06:27.292 256+0 records out 00:06:27.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106757 s, 98.2 MB/s 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.292 256+0 records in 00:06:27.292 256+0 records out 00:06:27.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140871 s, 74.4 MB/s 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.292 256+0 records in 00:06:27.292 256+0 records out 00:06:27.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151198 s, 69.4 MB/s 00:06:27.292 07:15:55 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.292 07:15:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.551 07:15:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.810 07:15:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.069 07:15:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.069 07:15:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.069 07:15:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.069 07:15:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.069 07:15:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.069 07:15:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.069 07:15:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.069 07:15:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.328 [2024-11-26 07:15:56.280482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.328 [2024-11-26 07:15:56.318320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.328 [2024-11-26 07:15:56.318323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.328 [2024-11-26 07:15:56.359155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.328 [2024-11-26 07:15:56.359195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.617 07:15:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.617 07:15:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.617 spdk_app_start Round 1 00:06:31.617 07:15:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 553415 /var/tmp/spdk-nbd.sock 00:06:31.617 07:15:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 553415 ']' 00:06:31.617 07:15:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.617 07:15:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.617 07:15:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
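
The nbd_get_count checks woven through the trace above are a one-liner pipeline: ask the app for its exported NBD disks, pull the nbd_device fields out with jq, and count how many /dev/nbd entries come back -- 2 while Malloc0 and Malloc1 are attached, 0 once both are stopped. A condensed sketch of that check, with the socket path from this run (the expected count is passed in by the caller):

#!/usr/bin/env bash
# Sketch of the nbd_get_count check from bdev/nbd_common.sh as traced above.
set -euo pipefail

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
expected=$1    # 2 right after the two nbd_start_disk calls, 0 after nbd_stop_disk

nbd_disks_json=$("$rpc_py" -s "$sock" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c prints 0 but exits non-zero on no match, hence the "|| true" guard
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)

if [ "$count" -ne "$expected" ]; then
    echo "expected $expected nbd devices, found $count" >&2
    exit 1
fi

Invoked with 2 between nbd_start_disk and nbd_stop_disk, and with 0 afterwards, which is exactly the pair of checks visible in the trace.
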
00:06:31.617 07:15:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.617 07:15:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.617 07:15:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.617 07:15:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:31.617 07:15:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.617 Malloc0 00:06:31.617 07:15:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.617 Malloc1 00:06:31.617 07:15:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.617 07:15:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.876 /dev/nbd0 00:06:31.876 07:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.876 07:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:31.876 1+0 records in 00:06:31.876 1+0 records out 00:06:31.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00246096 s, 1.7 MB/s 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.876 07:15:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:31.876 07:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.876 07:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.876 07:15:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.134 /dev/nbd1 00:06:32.134 07:16:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.134 07:16:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.134 1+0 records in 00:06:32.134 1+0 records out 00:06:32.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191065 s, 21.4 MB/s 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.134 07:16:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.134 07:16:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.134 07:16:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.134 07:16:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.134 07:16:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.134 07:16:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:32.394 { 00:06:32.394 "nbd_device": "/dev/nbd0", 00:06:32.394 "bdev_name": "Malloc0" 00:06:32.394 }, 00:06:32.394 { 00:06:32.394 "nbd_device": "/dev/nbd1", 00:06:32.394 "bdev_name": "Malloc1" 00:06:32.394 } 00:06:32.394 ]' 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.394 { 00:06:32.394 "nbd_device": "/dev/nbd0", 00:06:32.394 "bdev_name": "Malloc0" 00:06:32.394 }, 00:06:32.394 { 00:06:32.394 "nbd_device": "/dev/nbd1", 00:06:32.394 "bdev_name": "Malloc1" 00:06:32.394 } 00:06:32.394 ]' 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.394 /dev/nbd1' 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.394 /dev/nbd1' 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.394 256+0 records in 00:06:32.394 256+0 records out 00:06:32.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010205 s, 103 MB/s 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.394 256+0 records in 00:06:32.394 256+0 records out 00:06:32.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141247 s, 74.2 MB/s 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.394 07:16:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.653 256+0 records in 00:06:32.653 256+0 records out 00:06:32.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149382 s, 70.2 MB/s 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.653 07:16:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.912 07:16:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.170 07:16:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.170 07:16:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.429 07:16:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.688 [2024-11-26 07:16:01.548710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.688 [2024-11-26 07:16:01.587339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.688 [2024-11-26 07:16:01.587343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.688 [2024-11-26 07:16:01.629077] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.688 [2024-11-26 07:16:01.629118] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.975 07:16:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.975 07:16:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:36.975 spdk_app_start Round 2 00:06:36.975 07:16:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 553415 /var/tmp/spdk-nbd.sock 00:06:36.975 07:16:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 553415 ']' 00:06:36.975 07:16:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.975 07:16:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.975 07:16:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
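
The data path exercised in each round is the nbd_dd_data_verify pair seen above: fill a scratch file with 1 MiB of random data, dd it onto every exported NBD device with O_DIRECT, then cmp each device back against the file byte by byte before deleting it. A standalone sketch of that round trip, using the device list and scratch path from this run (needs root, since it writes to block devices):

#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write+verify pattern traced above.
set -euo pipefail

tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# write phase: 256 x 4 KiB of random data, pushed to each device with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: byte-compare the first 1 MiB of each device against the file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm "$tmp_file"
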
00:06:36.975 07:16:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.975 07:16:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.975 07:16:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.975 07:16:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:36.975 07:16:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.975 Malloc0 00:06:36.975 07:16:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.975 Malloc1 00:06:36.975 07:16:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.975 07:16:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.234 /dev/nbd0 00:06:37.234 07:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.234 07:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:37.234 1+0 records in 00:06:37.234 1+0 records out 00:06:37.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190787 s, 21.5 MB/s 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.234 07:16:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:37.234 07:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.234 07:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.234 07:16:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.493 /dev/nbd1 00:06:37.493 07:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.493 07:16:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.493 1+0 records in 00:06:37.493 1+0 records out 00:06:37.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000140805 s, 29.1 MB/s 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.493 07:16:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:37.493 07:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.493 07:16:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.493 07:16:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.493 07:16:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.493 07:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:37.752 { 00:06:37.752 "nbd_device": "/dev/nbd0", 00:06:37.752 "bdev_name": "Malloc0" 00:06:37.752 }, 00:06:37.752 { 00:06:37.752 "nbd_device": "/dev/nbd1", 00:06:37.752 "bdev_name": "Malloc1" 00:06:37.752 } 00:06:37.752 ]' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.752 { 00:06:37.752 "nbd_device": "/dev/nbd0", 00:06:37.752 "bdev_name": "Malloc0" 00:06:37.752 }, 00:06:37.752 { 00:06:37.752 "nbd_device": "/dev/nbd1", 00:06:37.752 "bdev_name": "Malloc1" 00:06:37.752 } 00:06:37.752 ]' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.752 /dev/nbd1' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.752 /dev/nbd1' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.752 256+0 records in 00:06:37.752 256+0 records out 00:06:37.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994914 s, 105 MB/s 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.752 256+0 records in 00:06:37.752 256+0 records out 00:06:37.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137312 s, 76.4 MB/s 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.752 256+0 records in 00:06:37.752 256+0 records out 00:06:37.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015221 s, 68.9 MB/s 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.752 07:16:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.011 07:16:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.269 07:16:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.528 07:16:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.528 07:16:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.788 07:16:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.788 [2024-11-26 07:16:06.812898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.788 [2024-11-26 07:16:06.850271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.788 [2024-11-26 07:16:06.850273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.047 [2024-11-26 07:16:06.891609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.047 [2024-11-26 07:16:06.891646] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.578 07:16:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 553415 /var/tmp/spdk-nbd.sock 00:06:41.578 07:16:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 553415 ']' 00:06:41.578 07:16:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.578 07:16:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.578 07:16:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
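
Every nbd_start_disk in this log is followed by the waitfornbd helper, which the trace shows in two stages: poll /proc/partitions until the named nbd device shows up, then prove it is readable with a single 4 KiB O_DIRECT read and a stat of the copied size. A reduced sketch of that idea follows; the retry bound of 20 and the dd/stat/rm sequence mirror the traced lines, while the sleep interval and the exit codes are assumptions (the real helper in autotest_common.sh carries more error handling).

#!/usr/bin/env bash
# Sketch of the waitfornbd pattern run after each nbd_start_disk above.
nbd_name=${1:-nbd0}
test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest

# stage 1: wait for the kernel to list the device in /proc/partitions
for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1
done

# stage 2: one 4 KiB O_DIRECT read into a scratch file, sanity-checked by size
for ((i = 1; i <= 20; i++)); do
    if dd if=/dev/"$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct; then
        size=$(stat -c %s "$test_file")
        rm -f "$test_file"
        [ "$size" != 0 ] && exit 0
    fi
done
exit 1
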
00:06:41.578 07:16:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.578 07:16:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:41.837 07:16:09 event.app_repeat -- event/event.sh@39 -- # killprocess 553415 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 553415 ']' 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 553415 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 553415 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 553415' 00:06:41.837 killing process with pid 553415 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 553415 00:06:41.837 07:16:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 553415 00:06:42.095 spdk_app_start is called in Round 0. 00:06:42.095 Shutdown signal received, stop current app iteration 00:06:42.095 Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 reinitialization... 00:06:42.095 spdk_app_start is called in Round 1. 00:06:42.095 Shutdown signal received, stop current app iteration 00:06:42.096 Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 reinitialization... 00:06:42.096 spdk_app_start is called in Round 2. 00:06:42.096 Shutdown signal received, stop current app iteration 00:06:42.096 Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 reinitialization... 00:06:42.096 spdk_app_start is called in Round 3. 
00:06:42.096 Shutdown signal received, stop current app iteration 00:06:42.096 07:16:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:42.096 07:16:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:42.096 00:06:42.096 real 0m16.252s 00:06:42.096 user 0m35.622s 00:06:42.096 sys 0m2.485s 00:06:42.096 07:16:10 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.096 07:16:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.096 ************************************ 00:06:42.096 END TEST app_repeat 00:06:42.096 ************************************ 00:06:42.096 07:16:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:42.096 07:16:10 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:42.096 07:16:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.096 07:16:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.096 07:16:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.096 ************************************ 00:06:42.096 START TEST cpu_locks 00:06:42.096 ************************************ 00:06:42.096 07:16:10 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:42.096 * Looking for test storage... 00:06:42.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:42.096 07:16:10 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.096 07:16:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.096 07:16:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.354 07:16:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.354 07:16:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.355 07:16:10 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:42.355 07:16:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.355 07:16:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.355 --rc genhtml_branch_coverage=1 00:06:42.355 --rc genhtml_function_coverage=1 00:06:42.355 --rc genhtml_legend=1 00:06:42.355 --rc geninfo_all_blocks=1 00:06:42.355 --rc geninfo_unexecuted_blocks=1 00:06:42.355 00:06:42.355 ' 00:06:42.355 07:16:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.355 --rc genhtml_branch_coverage=1 00:06:42.355 --rc genhtml_function_coverage=1 00:06:42.355 --rc genhtml_legend=1 00:06:42.355 --rc geninfo_all_blocks=1 00:06:42.355 --rc geninfo_unexecuted_blocks=1 00:06:42.355 00:06:42.355 ' 00:06:42.355 07:16:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.355 --rc genhtml_branch_coverage=1 00:06:42.355 --rc genhtml_function_coverage=1 00:06:42.355 --rc genhtml_legend=1 00:06:42.355 --rc geninfo_all_blocks=1 00:06:42.355 --rc geninfo_unexecuted_blocks=1 00:06:42.355 00:06:42.355 ' 00:06:42.355 07:16:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.355 --rc genhtml_branch_coverage=1 00:06:42.355 --rc genhtml_function_coverage=1 00:06:42.355 --rc genhtml_legend=1 00:06:42.355 --rc geninfo_all_blocks=1 00:06:42.355 --rc geninfo_unexecuted_blocks=1 00:06:42.355 00:06:42.355 ' 00:06:42.355 07:16:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:42.355 07:16:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:42.355 07:16:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:42.355 07:16:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:42.355 07:16:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.355 07:16:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.355 07:16:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.355 ************************************ 
00:06:42.355 START TEST default_locks 00:06:42.355 ************************************ 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=556413 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 556413 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 556413 ']' 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.355 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.355 [2024-11-26 07:16:10.330022] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:42.355 [2024-11-26 07:16:10.330063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid556413 ] 00:06:42.355 [2024-11-26 07:16:10.392278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.355 [2024-11-26 07:16:10.433134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.614 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.614 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:42.614 07:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 556413 00:06:42.614 07:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 556413 00:06:42.614 07:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.182 lslocks: write error 00:06:43.182 07:16:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 556413 00:06:43.182 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 556413 ']' 00:06:43.182 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 556413 00:06:43.182 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:43.182 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.182 07:16:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 556413 00:06:43.182 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.182 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.182 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 556413' 
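default_locks starts a single spdk_tgt pinned to core 0 (-m 0x1) and then asserts that the per-core lock is really held. The locks_exist call in the trace is nothing more than a file-lock query against the target's PID; sketched from the commands shown above (paths shortened; waitforlisten is the harness helper that polls the RPC socket):

    # sketch of the default_locks positive check traced above
    spdk_tgt -m 0x1 &                          # pin to core 0; the target takes the core lock
    pid=$!
    waitforlisten "$pid"                       # wait until /var/tmp/spdk.sock answers RPCs
    lslocks -p "$pid" | grep -q spdk_cpu_lock  # the core lock is a file lock, so lslocks can see it

The stray 'lslocks: write error' lines in the log are harmless: grep -q exits as soon as it finds a match, so lslocks gets a broken pipe while still printing the rest of its table.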
00:06:43.182 killing process with pid 556413 00:06:43.182 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 556413 00:06:43.182 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 556413 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 556413 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 556413 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 556413 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 556413 ']' 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (556413) - No such process 00:06:43.442 ERROR: process (pid: 556413) is no longer running 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.442 00:06:43.442 real 0m1.067s 00:06:43.442 user 0m1.024s 00:06:43.442 sys 0m0.488s 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.442 07:16:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.442 ************************************ 00:06:43.442 END TEST default_locks 00:06:43.442 ************************************ 00:06:43.442 07:16:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:43.442 07:16:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.442 07:16:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.442 07:16:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.442 ************************************ 00:06:43.442 START TEST default_locks_via_rpc 00:06:43.442 ************************************ 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=556671 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 556671 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 556671 ']' 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
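At the end of default_locks the assertions flip to the negative side: once the target has been killed, waiting on its PID must fail and no spdk_cpu_lock files may be left behind. The 'kill: (556413) - No such process' and 'ERROR: process (pid: 556413) is no longer running' lines above are therefore the expected outcome, not a failure. The NOT wrapper and no_locks check boil down to roughly the following (simplified; the real NOT helper also validates its argument before running it):

    # simplified reading of the negative checks traced above
    NOT() {                                    # succeeds only if the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock_*)   # per-core lock files (nullglob assumed)
        (( ${#lock_files[@]} == 0 ))                  # any leftover file is a leaked lock
    }
    NOT waitforlisten "$pid"                   # the dead PID can no longer be waited on
    no_locks                                   # ...and its core lock file is gone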
00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.442 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.442 [2024-11-26 07:16:11.459172] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:43.442 [2024-11-26 07:16:11.459215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid556671 ] 00:06:43.442 [2024-11-26 07:16:11.520886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.702 [2024-11-26 07:16:11.564663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 556671 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 556671 00:06:43.702 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 556671 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 556671 ']' 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 556671 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 556671 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 556671' 00:06:43.962 killing process with pid 556671 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 556671 00:06:43.962 07:16:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 556671 00:06:44.222 00:06:44.222 real 0m0.839s 00:06:44.222 user 0m0.785s 00:06:44.222 sys 0m0.384s 00:06:44.222 07:16:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.222 07:16:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.222 ************************************ 00:06:44.222 END TEST default_locks_via_rpc 00:06:44.222 ************************************ 00:06:44.222 07:16:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:44.222 07:16:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.222 07:16:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.222 07:16:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.222 ************************************ 00:06:44.222 START TEST non_locking_app_on_locked_coremask 00:06:44.222 ************************************ 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=556709 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 556709 /var/tmp/spdk.sock 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 556709 ']' 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.222 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.483 [2024-11-26 07:16:12.346006] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
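The default_locks_via_rpc run that just finished toggles the same core lock at runtime instead of at startup: framework_disable_cpumask_locks releases the lock files and framework_enable_cpumask_locks re-acquires them, both over the target's RPC socket. Approximately, in rpc.py terms (the trace goes through the harness rpc_cmd wrapper, which forwards to rpc.py):

    # approximate RPC sequence for default_locks_via_rpc
    spdk_tgt -m 0x1 &                                  # lock on core 0 taken at startup as usual
    pid=$!; waitforlisten "$pid"
    scripts/rpc.py framework_disable_cpumask_locks     # drop the core lock at runtime
    no_locks                                           # /var/tmp/spdk_cpu_lock_* is now empty
    scripts/rpc.py framework_enable_cpumask_locks      # take the lock back
    lslocks -p "$pid" | grep -q spdk_cpu_lock          # and verify it is held again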
00:06:44.483 [2024-11-26 07:16:12.346045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid556709 ] 00:06:44.483 [2024-11-26 07:16:12.407200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.483 [2024-11-26 07:16:12.451609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.742 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.742 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.742 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=556875 00:06:44.742 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 556875 /var/tmp/spdk2.sock 00:06:44.742 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 556875 ']' 00:06:44.742 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.742 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.742 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.743 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:44.743 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.743 07:16:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.743 [2024-11-26 07:16:12.708276] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:44.743 [2024-11-26 07:16:12.708327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid556875 ] 00:06:44.743 [2024-11-26 07:16:12.801031] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.743 [2024-11-26 07:16:12.801058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.002 [2024-11-26 07:16:12.890670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.570 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.570 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.570 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 556709 00:06:45.570 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 556709 00:06:45.570 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.139 lslocks: write error 00:06:46.139 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 556709 00:06:46.139 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 556709 ']' 00:06:46.139 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 556709 00:06:46.139 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.139 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.139 07:16:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 556709 00:06:46.139 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.139 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.139 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 556709' 00:06:46.139 killing process with pid 556709 00:06:46.139 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 556709 00:06:46.139 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 556709 00:06:46.706 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 556875 00:06:46.706 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 556875 ']' 00:06:46.706 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 556875 00:06:46.707 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.707 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.707 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 556875 00:06:46.707 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.707 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.707 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 556875' 00:06:46.707 killing 
process with pid 556875 00:06:46.707 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 556875 00:06:46.707 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 556875 00:06:46.965 00:06:46.965 real 0m2.685s 00:06:46.965 user 0m2.864s 00:06:46.965 sys 0m0.867s 00:06:46.965 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.965 07:16:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.965 ************************************ 00:06:46.965 END TEST non_locking_app_on_locked_coremask 00:06:46.965 ************************************ 00:06:46.965 07:16:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:46.965 07:16:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.965 07:16:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.965 07:16:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.965 ************************************ 00:06:46.965 START TEST locking_app_on_unlocked_coremask 00:06:46.965 ************************************ 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=557214 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 557214 /var/tmp/spdk.sock 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 557214 ']' 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.965 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.224 [2024-11-26 07:16:15.111989] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:47.224 [2024-11-26 07:16:15.112034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557214 ] 00:06:47.224 [2024-11-26 07:16:15.174527] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
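The non_locking_app_on_locked_coremask run above demonstrates the escape hatch: the first spdk_tgt holds the core 0 lock, yet a second instance is still allowed onto the same core because it is launched with --disable-cpumask-locks and its own RPC socket. The 'CPU core locks deactivated.' notices belong to that second instance, which never touches the lock files. Schematically (a sketch from the trace, not the verbatim cpu_locks.sh):

    # two targets sharing core 0, as traced above
    spdk_tgt -m 0x1 &                                                 # instance 1 takes the core 0 lock
    pid1=$!; waitforlisten "$pid1"
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # instance 2 skips locking entirely
    pid2=$!; waitforlisten "$pid2" /var/tmp/spdk2.sock
    lslocks -p "$pid1" | grep -q spdk_cpu_lock                        # only instance 1 shows the lock

The test that follows, locking_app_on_unlocked_coremask, runs the same pair in the opposite order: the unlocked instance comes up first, so the normally locking one can still claim the core afterwards.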
00:06:47.224 [2024-11-26 07:16:15.174550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.224 [2024-11-26 07:16:15.213746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=557380 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 557380 /var/tmp/spdk2.sock 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 557380 ']' 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.482 07:16:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.483 [2024-11-26 07:16:15.477327] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:47.483 [2024-11-26 07:16:15.477378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557380 ] 00:06:47.483 [2024-11-26 07:16:15.567310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.741 [2024-11-26 07:16:15.648078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.306 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.306 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.306 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 557380 00:06:48.306 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 557380 00:06:48.306 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.872 lslocks: write error 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 557214 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 557214 ']' 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 557214 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 557214 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 557214' 00:06:48.872 killing process with pid 557214 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 557214 00:06:48.872 07:16:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 557214 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 557380 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 557380 ']' 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 557380 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 557380 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.440 07:16:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 557380' 00:06:49.440 killing process with pid 557380 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 557380 00:06:49.440 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 557380 00:06:49.700 00:06:49.700 real 0m2.653s 00:06:49.700 user 0m2.811s 00:06:49.700 sys 0m0.893s 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.700 ************************************ 00:06:49.700 END TEST locking_app_on_unlocked_coremask 00:06:49.700 ************************************ 00:06:49.700 07:16:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:49.700 07:16:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.700 07:16:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.700 07:16:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.700 ************************************ 00:06:49.700 START TEST locking_app_on_locked_coremask 00:06:49.700 ************************************ 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=557711 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 557711 /var/tmp/spdk.sock 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 557711 ']' 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.700 07:16:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.959 [2024-11-26 07:16:17.823443] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:49.959 [2024-11-26 07:16:17.823486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557711 ] 00:06:49.959 [2024-11-26 07:16:17.885697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.959 [2024-11-26 07:16:17.924109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=557853 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 557853 /var/tmp/spdk2.sock 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 557853 /var/tmp/spdk2.sock 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 557853 /var/tmp/spdk2.sock 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 557853 ']' 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.218 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.218 [2024-11-26 07:16:18.183978] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:50.218 [2024-11-26 07:16:18.184029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid557853 ] 00:06:50.218 [2024-11-26 07:16:18.278127] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 557711 has claimed it. 00:06:50.218 [2024-11-26 07:16:18.278169] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (557853) - No such process 00:06:50.786 ERROR: process (pid: 557853) is no longer running 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 557711 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 557711 00:06:50.786 07:16:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.354 lslocks: write error 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 557711 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 557711 ']' 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 557711 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 557711 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 557711' 00:06:51.354 killing process with pid 557711 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 557711 00:06:51.354 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 557711 00:06:51.612 00:06:51.612 real 0m1.797s 00:06:51.612 user 0m1.971s 00:06:51.612 sys 0m0.599s 00:06:51.612 07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.612 
07:16:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.612 ************************************ 00:06:51.612 END TEST locking_app_on_locked_coremask 00:06:51.612 ************************************ 00:06:51.612 07:16:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:51.612 07:16:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.612 07:16:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.612 07:16:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.612 ************************************ 00:06:51.612 START TEST locking_overlapped_coremask 00:06:51.612 ************************************ 00:06:51.612 07:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:51.612 07:16:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=558189 00:06:51.612 07:16:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 558189 /var/tmp/spdk.sock 00:06:51.612 07:16:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:51.613 07:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 558189 ']' 00:06:51.613 07:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.613 07:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.613 07:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.613 07:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.613 07:16:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.613 [2024-11-26 07:16:19.694336] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:51.613 [2024-11-26 07:16:19.694382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558189 ] 00:06:51.872 [2024-11-26 07:16:19.756327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.872 [2024-11-26 07:16:19.798714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.872 [2024-11-26 07:16:19.798813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.872 [2024-11-26 07:16:19.798816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=558200 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 558200 /var/tmp/spdk2.sock 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 558200 /var/tmp/spdk2.sock 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 558200 /var/tmp/spdk2.sock 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 558200 ']' 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.131 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.131 [2024-11-26 07:16:20.061337] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:52.131 [2024-11-26 07:16:20.061387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558200 ] 00:06:52.131 [2024-11-26 07:16:20.155508] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 558189 has claimed it. 00:06:52.131 [2024-11-26 07:16:20.155549] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:52.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (558200) - No such process 00:06:52.700 ERROR: process (pid: 558200) is no longer running 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 558189 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 558189 ']' 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 558189 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 558189 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 558189' 00:06:52.700 killing process with pid 558189 00:06:52.700 07:16:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 558189 00:06:52.700 07:16:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 558189 00:06:53.269 00:06:53.269 real 0m1.432s 00:06:53.269 user 0m3.974s 00:06:53.269 sys 0m0.384s 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.269 ************************************ 00:06:53.269 END TEST locking_overlapped_coremask 00:06:53.269 ************************************ 00:06:53.269 07:16:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:53.269 07:16:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.269 07:16:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.269 07:16:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.269 ************************************ 00:06:53.269 START TEST locking_overlapped_coremask_via_rpc 00:06:53.269 ************************************ 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=558458 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 558458 /var/tmp/spdk.sock 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 558458 ']' 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.269 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.269 [2024-11-26 07:16:21.174176] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:53.269 [2024-11-26 07:16:21.174215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558458 ] 00:06:53.269 [2024-11-26 07:16:21.236554] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
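locking_overlapped_coremask is the conflicting case: the first target claims cores 0-2 with -m 0x7, so the second target asking for cores 2-4 with -m 0x1c is refused ('Cannot create lock on core 2, probably process 558189 has claimed it') and exits. check_remaining_locks then confirms that exactly the first target's three lock files are present; the comparison is visible almost verbatim in the trace:

    # check_remaining_locks for the surviving -m 0x7 target, as traced above
    locks=(/var/tmp/spdk_cpu_lock_*)                    # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1, 2 claimed by -m 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # anything else is a leaked or missing lock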
00:06:53.269 [2024-11-26 07:16:21.236578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.269 [2024-11-26 07:16:21.282333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.269 [2024-11-26 07:16:21.282429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.269 [2024-11-26 07:16:21.282431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=558466 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 558466 /var/tmp/spdk2.sock 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 558466 ']' 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.529 07:16:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.529 [2024-11-26 07:16:21.538594] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:53.529 [2024-11-26 07:16:21.538642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558466 ] 00:06:53.788 [2024-11-26 07:16:21.631442] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:53.788 [2024-11-26 07:16:21.631466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.788 [2024-11-26 07:16:21.719406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.788 [2024-11-26 07:16:21.722998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.788 [2024-11-26 07:16:21.722999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.356 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.357 [2024-11-26 07:16:22.392021] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 558458 has claimed it. 
00:06:54.357 request: 00:06:54.357 { 00:06:54.357 "method": "framework_enable_cpumask_locks", 00:06:54.357 "req_id": 1 00:06:54.357 } 00:06:54.357 Got JSON-RPC error response 00:06:54.357 response: 00:06:54.357 { 00:06:54.357 "code": -32603, 00:06:54.357 "message": "Failed to claim CPU core: 2" 00:06:54.357 } 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 558458 /var/tmp/spdk.sock 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 558458 ']' 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.357 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 558466 /var/tmp/spdk2.sock 00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 558466 ']' 00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
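The -32603 response above is the expected outcome of locking_overlapped_coremask_via_rpc: the first target (mask 0x7) has already enabled per-core lock files, so when the second target on /var/tmp/spdk2.sock (mask 0x1c, overlapping on core 2) is asked to do the same, claim_cpu_cores fails with "Failed to claim CPU core: 2". A minimal way to replay the same exchange by hand, assuming two spdk_tgt instances are already listening on the sockets used in this run (the relative rpc.py path is an assumption; the RPC name and socket paths are taken from the trace):

    scripts/rpc.py framework_enable_cpumask_locks                           # first target claims /var/tmp/spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # overlapping mask -> JSON-RPC error -32603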
00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.616 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.875 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.875 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.875 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:54.875 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.875 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.875 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.875 00:06:54.875 real 0m1.666s 00:06:54.875 user 0m0.806s 00:06:54.875 sys 0m0.137s 00:06:54.875 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.875 07:16:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.875 ************************************ 00:06:54.875 END TEST locking_overlapped_coremask_via_rpc 00:06:54.875 ************************************ 00:06:54.875 07:16:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:54.875 07:16:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 558458 ]] 00:06:54.875 07:16:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 558458 00:06:54.875 07:16:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 558458 ']' 00:06:54.875 07:16:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 558458 00:06:54.875 07:16:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.875 07:16:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.875 07:16:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 558458 00:06:54.875 07:16:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.875 07:16:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.875 07:16:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 558458' 00:06:54.876 killing process with pid 558458 00:06:54.876 07:16:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 558458 00:06:54.876 07:16:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 558458 00:06:55.135 07:16:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 558466 ]] 00:06:55.135 07:16:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 558466 00:06:55.135 07:16:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 558466 ']' 00:06:55.135 07:16:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 558466 00:06:55.135 07:16:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:55.135 07:16:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
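check_remaining_locks, traced twice above (cpu_locks.sh@36-38), boils down to comparing a glob of the existing lock files against a brace expansion of the expected ones. The same check as a stand-alone sketch for the 0x7 mask used here (the error message is illustrative only):

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what a 0x7 core mask (cores 0-2) should leave behind
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected CPU lock files: ${locks[*]}" >&2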
00:06:55.135 07:16:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 558466 00:06:55.395 07:16:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:55.395 07:16:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:55.395 07:16:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 558466' 00:06:55.395 killing process with pid 558466 00:06:55.395 07:16:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 558466 00:06:55.395 07:16:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 558466 00:06:55.654 07:16:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.654 07:16:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.654 07:16:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 558458 ]] 00:06:55.654 07:16:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 558458 00:06:55.654 07:16:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 558458 ']' 00:06:55.654 07:16:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 558458 00:06:55.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (558458) - No such process 00:06:55.654 07:16:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 558458 is not found' 00:06:55.654 Process with pid 558458 is not found 00:06:55.654 07:16:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 558466 ]] 00:06:55.654 07:16:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 558466 00:06:55.654 07:16:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 558466 ']' 00:06:55.654 07:16:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 558466 00:06:55.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (558466) - No such process 00:06:55.654 07:16:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 558466 is not found' 00:06:55.654 Process with pid 558466 is not found 00:06:55.654 07:16:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.654 00:06:55.654 real 0m13.439s 00:06:55.654 user 0m23.800s 00:06:55.654 sys 0m4.649s 00:06:55.654 07:16:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.655 07:16:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.655 ************************************ 00:06:55.655 END TEST cpu_locks 00:06:55.655 ************************************ 00:06:55.655 00:06:55.655 real 0m38.262s 00:06:55.655 user 1m13.902s 00:06:55.655 sys 0m8.050s 00:06:55.655 07:16:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.655 07:16:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.655 ************************************ 00:06:55.655 END TEST event 00:06:55.655 ************************************ 00:06:55.655 07:16:23 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:55.655 07:16:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.655 07:16:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.655 07:16:23 -- common/autotest_common.sh@10 -- # set +x 00:06:55.655 ************************************ 00:06:55.655 START TEST thread 00:06:55.655 ************************************ 00:06:55.655 07:16:23 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:55.655 * Looking for test storage... 00:06:55.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:55.655 07:16:23 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.655 07:16:23 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.655 07:16:23 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.914 07:16:23 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.914 07:16:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.914 07:16:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.914 07:16:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.914 07:16:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.914 07:16:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.914 07:16:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.914 07:16:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.914 07:16:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.914 07:16:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.914 07:16:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.914 07:16:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.914 07:16:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:55.914 07:16:23 thread -- scripts/common.sh@345 -- # : 1 00:06:55.914 07:16:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.914 07:16:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.914 07:16:23 thread -- scripts/common.sh@365 -- # decimal 1 00:06:55.914 07:16:23 thread -- scripts/common.sh@353 -- # local d=1 00:06:55.914 07:16:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.914 07:16:23 thread -- scripts/common.sh@355 -- # echo 1 00:06:55.914 07:16:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.915 07:16:23 thread -- scripts/common.sh@366 -- # decimal 2 00:06:55.915 07:16:23 thread -- scripts/common.sh@353 -- # local d=2 00:06:55.915 07:16:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.915 07:16:23 thread -- scripts/common.sh@355 -- # echo 2 00:06:55.915 07:16:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.915 07:16:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.915 07:16:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.915 07:16:23 thread -- scripts/common.sh@368 -- # return 0 00:06:55.915 07:16:23 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.915 07:16:23 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.915 --rc genhtml_branch_coverage=1 00:06:55.915 --rc genhtml_function_coverage=1 00:06:55.915 --rc genhtml_legend=1 00:06:55.915 --rc geninfo_all_blocks=1 00:06:55.915 --rc geninfo_unexecuted_blocks=1 00:06:55.915 00:06:55.915 ' 00:06:55.915 07:16:23 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.915 --rc genhtml_branch_coverage=1 00:06:55.915 --rc genhtml_function_coverage=1 00:06:55.915 --rc genhtml_legend=1 00:06:55.915 --rc geninfo_all_blocks=1 00:06:55.915 --rc geninfo_unexecuted_blocks=1 00:06:55.915 00:06:55.915 ' 00:06:55.915 07:16:23 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.915 --rc genhtml_branch_coverage=1 00:06:55.915 --rc genhtml_function_coverage=1 00:06:55.915 --rc genhtml_legend=1 00:06:55.915 --rc geninfo_all_blocks=1 00:06:55.915 --rc geninfo_unexecuted_blocks=1 00:06:55.915 00:06:55.915 ' 00:06:55.915 07:16:23 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.915 --rc genhtml_branch_coverage=1 00:06:55.915 --rc genhtml_function_coverage=1 00:06:55.915 --rc genhtml_legend=1 00:06:55.915 --rc geninfo_all_blocks=1 00:06:55.915 --rc geninfo_unexecuted_blocks=1 00:06:55.915 00:06:55.915 ' 00:06:55.915 07:16:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.915 07:16:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:55.915 07:16:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.915 07:16:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.915 ************************************ 00:06:55.915 START TEST thread_poller_perf 00:06:55.915 ************************************ 00:06:55.915 07:16:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.915 [2024-11-26 07:16:23.878780] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:55.915 [2024-11-26 07:16:23.878851] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559031 ] 00:06:55.915 [2024-11-26 07:16:23.945883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.915 [2024-11-26 07:16:23.986366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.915 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:57.294 [2024-11-26T06:16:25.394Z] ====================================== 00:06:57.294 [2024-11-26T06:16:25.394Z] busy:2309565620 (cyc) 00:06:57.294 [2024-11-26T06:16:25.394Z] total_run_count: 407000 00:06:57.294 [2024-11-26T06:16:25.394Z] tsc_hz: 2300000000 (cyc) 00:06:57.294 [2024-11-26T06:16:25.394Z] ====================================== 00:06:57.294 [2024-11-26T06:16:25.394Z] poller_cost: 5674 (cyc), 2466 (nsec) 00:06:57.294 00:06:57.294 real 0m1.178s 00:06:57.294 user 0m1.106s 00:06:57.294 sys 0m0.068s 00:06:57.294 07:16:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.294 07:16:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.294 ************************************ 00:06:57.294 END TEST thread_poller_perf 00:06:57.294 ************************************ 00:06:57.294 07:16:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.294 07:16:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:57.294 07:16:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.294 07:16:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.294 ************************************ 00:06:57.294 START TEST thread_poller_perf 00:06:57.294 ************************************ 00:06:57.294 07:16:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:57.294 [2024-11-26 07:16:25.125630] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:06:57.294 [2024-11-26 07:16:25.125699] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559272 ] 00:06:57.294 [2024-11-26 07:16:25.190814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.294 [2024-11-26 07:16:25.230896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.294 Running 1000 pollers for 1 seconds with 0 microseconds period. 
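The poller_cost printed for the 1-microsecond-period run above is consistent with busy cycles divided by total_run_count, converted to nanoseconds via the reported tsc_hz; re-deriving it from the printed counters (arithmetic only, numbers copied from the report):

    busy_cyc=2309565620; runs=407000; tsc_hz=2300000000
    cyc=$(( busy_cyc / runs ))                              # 5674 cyc per poll
    echo "$cyc cyc, $(( cyc * 1000000000 / tsc_hz )) nsec"  # 5674 cyc, 2466 nsec - matches the report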
00:06:58.231 [2024-11-26T06:16:26.331Z] ====================================== 00:06:58.231 [2024-11-26T06:16:26.331Z] busy:2301814876 (cyc) 00:06:58.231 [2024-11-26T06:16:26.331Z] total_run_count: 5404000 00:06:58.231 [2024-11-26T06:16:26.331Z] tsc_hz: 2300000000 (cyc) 00:06:58.231 [2024-11-26T06:16:26.331Z] ====================================== 00:06:58.231 [2024-11-26T06:16:26.331Z] poller_cost: 425 (cyc), 184 (nsec) 00:06:58.231 00:06:58.231 real 0m1.168s 00:06:58.231 user 0m1.104s 00:06:58.231 sys 0m0.060s 00:06:58.231 07:16:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.231 07:16:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.231 ************************************ 00:06:58.231 END TEST thread_poller_perf 00:06:58.231 ************************************ 00:06:58.231 07:16:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:58.231 00:06:58.231 real 0m2.652s 00:06:58.231 user 0m2.367s 00:06:58.231 sys 0m0.299s 00:06:58.231 07:16:26 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.231 07:16:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.231 ************************************ 00:06:58.231 END TEST thread 00:06:58.231 ************************************ 00:06:58.490 07:16:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:58.490 07:16:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:58.490 07:16:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.490 07:16:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.490 07:16:26 -- common/autotest_common.sh@10 -- # set +x 00:06:58.490 ************************************ 00:06:58.490 START TEST app_cmdline 00:06:58.490 ************************************ 00:06:58.490 07:16:26 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:58.490 * Looking for test storage... 
00:06:58.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:58.490 07:16:26 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.490 07:16:26 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.490 07:16:26 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.490 07:16:26 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:58.490 07:16:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.491 07:16:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:58.491 07:16:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:58.491 07:16:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.491 07:16:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:58.491 07:16:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.491 07:16:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.491 07:16:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.491 07:16:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.491 --rc genhtml_branch_coverage=1 00:06:58.491 --rc genhtml_function_coverage=1 00:06:58.491 --rc genhtml_legend=1 00:06:58.491 --rc geninfo_all_blocks=1 00:06:58.491 --rc geninfo_unexecuted_blocks=1 00:06:58.491 00:06:58.491 ' 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.491 --rc genhtml_branch_coverage=1 00:06:58.491 --rc genhtml_function_coverage=1 00:06:58.491 --rc genhtml_legend=1 00:06:58.491 --rc geninfo_all_blocks=1 00:06:58.491 --rc geninfo_unexecuted_blocks=1 
00:06:58.491 00:06:58.491 ' 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.491 --rc genhtml_branch_coverage=1 00:06:58.491 --rc genhtml_function_coverage=1 00:06:58.491 --rc genhtml_legend=1 00:06:58.491 --rc geninfo_all_blocks=1 00:06:58.491 --rc geninfo_unexecuted_blocks=1 00:06:58.491 00:06:58.491 ' 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.491 --rc genhtml_branch_coverage=1 00:06:58.491 --rc genhtml_function_coverage=1 00:06:58.491 --rc genhtml_legend=1 00:06:58.491 --rc geninfo_all_blocks=1 00:06:58.491 --rc geninfo_unexecuted_blocks=1 00:06:58.491 00:06:58.491 ' 00:06:58.491 07:16:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:58.491 07:16:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=559578 00:06:58.491 07:16:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 559578 00:06:58.491 07:16:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 559578 ']' 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.491 07:16:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.750 [2024-11-26 07:16:26.593551] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:06:58.750 [2024-11-26 07:16:26.593599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559578 ] 00:06:58.750 [2024-11-26 07:16:26.655487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.751 [2024-11-26 07:16:26.698278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.011 07:16:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.011 07:16:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:59.011 07:16:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:59.011 { 00:06:59.011 "version": "SPDK v25.01-pre git sha1 9c7e54d62", 00:06:59.011 "fields": { 00:06:59.011 "major": 25, 00:06:59.011 "minor": 1, 00:06:59.011 "patch": 0, 00:06:59.011 "suffix": "-pre", 00:06:59.011 "commit": "9c7e54d62" 00:06:59.011 } 00:06:59.011 } 00:06:59.011 07:16:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:59.011 07:16:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:59.011 07:16:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:59.011 07:16:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:59.011 07:16:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:59.011 07:16:27 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.011 07:16:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.011 07:16:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:59.011 07:16:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:59.011 07:16:27 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.270 07:16:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:59.270 07:16:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:59.270 07:16:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.270 request: 00:06:59.270 { 00:06:59.270 "method": "env_dpdk_get_mem_stats", 00:06:59.270 "req_id": 1 00:06:59.270 } 00:06:59.270 Got JSON-RPC error response 00:06:59.270 response: 00:06:59.270 { 00:06:59.270 "code": -32601, 00:06:59.270 "message": "Method not found" 00:06:59.270 } 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.270 07:16:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 559578 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 559578 ']' 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 559578 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.270 07:16:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 559578 00:06:59.530 07:16:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.530 07:16:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.530 07:16:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 559578' 00:06:59.530 killing process with pid 559578 00:06:59.530 07:16:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 559578 00:06:59.530 07:16:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 559578 00:06:59.789 00:06:59.789 real 0m1.305s 00:06:59.789 user 0m1.548s 00:06:59.789 sys 0m0.409s 00:06:59.789 07:16:27 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.789 07:16:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.789 ************************************ 00:06:59.789 END TEST app_cmdline 00:06:59.789 ************************************ 00:06:59.789 07:16:27 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:59.789 07:16:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.789 07:16:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.789 07:16:27 -- common/autotest_common.sh@10 -- # set +x 00:06:59.789 ************************************ 00:06:59.789 START TEST version 00:06:59.789 ************************************ 00:06:59.789 07:16:27 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:59.789 * Looking for test storage... 
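The -32601 "Method not found" above is the point of the cmdline.sh run: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served and env_dpdk_get_mem_stats is rejected. Replaying the allow-list behaviour by hand, assuming the same build-tree layout (paths and backgrounding are assumptions; the flag and method names are from the trace):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version          # allowed -> the version JSON shown above
    scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow-list -> -32601 "Method not found"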
00:06:59.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.789 07:16:27 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.789 07:16:27 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.789 07:16:27 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.049 07:16:27 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.049 07:16:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.049 07:16:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.049 07:16:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.049 07:16:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.049 07:16:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.049 07:16:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.049 07:16:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.049 07:16:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.049 07:16:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.049 07:16:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.049 07:16:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.049 07:16:27 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.049 07:16:27 version -- scripts/common.sh@345 -- # : 1 00:07:00.049 07:16:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.049 07:16:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.049 07:16:27 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.049 07:16:27 version -- scripts/common.sh@353 -- # local d=1 00:07:00.049 07:16:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.049 07:16:27 version -- scripts/common.sh@355 -- # echo 1 00:07:00.049 07:16:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.049 07:16:27 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.049 07:16:27 version -- scripts/common.sh@353 -- # local d=2 00:07:00.049 07:16:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.049 07:16:27 version -- scripts/common.sh@355 -- # echo 2 00:07:00.049 07:16:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.049 07:16:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.049 07:16:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.049 07:16:27 version -- scripts/common.sh@368 -- # return 0 00:07:00.049 07:16:27 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.049 07:16:27 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.049 --rc genhtml_branch_coverage=1 00:07:00.049 --rc genhtml_function_coverage=1 00:07:00.049 --rc genhtml_legend=1 00:07:00.049 --rc geninfo_all_blocks=1 00:07:00.049 --rc geninfo_unexecuted_blocks=1 00:07:00.049 00:07:00.049 ' 00:07:00.049 07:16:27 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.049 --rc genhtml_branch_coverage=1 00:07:00.049 --rc genhtml_function_coverage=1 00:07:00.049 --rc genhtml_legend=1 00:07:00.049 --rc geninfo_all_blocks=1 00:07:00.049 --rc geninfo_unexecuted_blocks=1 00:07:00.049 00:07:00.049 ' 00:07:00.049 07:16:27 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.049 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.049 --rc genhtml_branch_coverage=1 00:07:00.049 --rc genhtml_function_coverage=1 00:07:00.049 --rc genhtml_legend=1 00:07:00.049 --rc geninfo_all_blocks=1 00:07:00.049 --rc geninfo_unexecuted_blocks=1 00:07:00.049 00:07:00.049 ' 00:07:00.049 07:16:27 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.049 --rc genhtml_branch_coverage=1 00:07:00.049 --rc genhtml_function_coverage=1 00:07:00.049 --rc genhtml_legend=1 00:07:00.049 --rc geninfo_all_blocks=1 00:07:00.049 --rc geninfo_unexecuted_blocks=1 00:07:00.049 00:07:00.049 ' 00:07:00.049 07:16:27 version -- app/version.sh@17 -- # get_header_version major 00:07:00.049 07:16:27 version -- app/version.sh@14 -- # cut -f2 00:07:00.049 07:16:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.049 07:16:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.049 07:16:27 version -- app/version.sh@17 -- # major=25 00:07:00.049 07:16:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.049 07:16:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.050 07:16:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.050 07:16:27 version -- app/version.sh@14 -- # cut -f2 00:07:00.050 07:16:27 version -- app/version.sh@18 -- # minor=1 00:07:00.050 07:16:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.050 07:16:27 version -- app/version.sh@14 -- # cut -f2 00:07:00.050 07:16:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.050 07:16:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.050 07:16:27 version -- app/version.sh@19 -- # patch=0 00:07:00.050 07:16:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.050 07:16:27 version -- app/version.sh@14 -- # cut -f2 00:07:00.050 07:16:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.050 07:16:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.050 07:16:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.050 07:16:27 version -- app/version.sh@22 -- # version=25.1 00:07:00.050 07:16:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.050 07:16:27 version -- app/version.sh@28 -- # version=25.1rc0 00:07:00.050 07:16:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:00.050 07:16:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.050 07:16:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:00.050 07:16:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:00.050 00:07:00.050 real 0m0.247s 00:07:00.050 user 0m0.153s 00:07:00.050 sys 0m0.132s 00:07:00.050 07:16:27 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.050 
07:16:27 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.050 ************************************ 00:07:00.050 END TEST version 00:07:00.050 ************************************ 00:07:00.050 07:16:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:00.050 07:16:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:00.050 07:16:28 -- spdk/autotest.sh@194 -- # uname -s 00:07:00.050 07:16:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:00.050 07:16:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.050 07:16:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:00.050 07:16:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:00.050 07:16:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:00.050 07:16:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:00.050 07:16:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.050 07:16:28 -- common/autotest_common.sh@10 -- # set +x 00:07:00.050 07:16:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:00.050 07:16:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:00.050 07:16:28 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:00.050 07:16:28 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:00.050 07:16:28 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:00.050 07:16:28 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:00.050 07:16:28 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:00.050 07:16:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.050 07:16:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.050 07:16:28 -- common/autotest_common.sh@10 -- # set +x 00:07:00.050 ************************************ 00:07:00.050 START TEST nvmf_tcp 00:07:00.050 ************************************ 00:07:00.050 07:16:28 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:00.310 * Looking for test storage... 
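get_header_version, traced in the version test above, is a grep/cut/tr pipeline over include/spdk/version.h whose fields are reassembled into "25.1rc0" and checked against the Python package version. The same extraction as a one-off, with the pattern copied from the trace and the path taken relative to the spdk checkout (PYTHONPATH is assumed to point at the bundled python bindings, as it does in the trace):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25
    python3 -c 'import spdk; print(spdk.__version__)'                                               # -> 25.1rc0, compared against the header-derived string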
00:07:00.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.310 07:16:28 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.310 --rc genhtml_branch_coverage=1 00:07:00.310 --rc genhtml_function_coverage=1 00:07:00.310 --rc genhtml_legend=1 00:07:00.310 --rc geninfo_all_blocks=1 00:07:00.310 --rc geninfo_unexecuted_blocks=1 00:07:00.310 00:07:00.310 ' 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.310 --rc genhtml_branch_coverage=1 00:07:00.310 --rc genhtml_function_coverage=1 00:07:00.310 --rc genhtml_legend=1 00:07:00.310 --rc geninfo_all_blocks=1 00:07:00.310 --rc geninfo_unexecuted_blocks=1 00:07:00.310 00:07:00.310 ' 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:00.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.310 --rc genhtml_branch_coverage=1 00:07:00.310 --rc genhtml_function_coverage=1 00:07:00.310 --rc genhtml_legend=1 00:07:00.310 --rc geninfo_all_blocks=1 00:07:00.310 --rc geninfo_unexecuted_blocks=1 00:07:00.310 00:07:00.310 ' 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.310 --rc genhtml_branch_coverage=1 00:07:00.310 --rc genhtml_function_coverage=1 00:07:00.310 --rc genhtml_legend=1 00:07:00.310 --rc geninfo_all_blocks=1 00:07:00.310 --rc geninfo_unexecuted_blocks=1 00:07:00.310 00:07:00.310 ' 00:07:00.310 07:16:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:00.310 07:16:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:00.310 07:16:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.310 07:16:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.310 ************************************ 00:07:00.310 START TEST nvmf_target_core 00:07:00.310 ************************************ 00:07:00.310 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:00.310 * Looking for test storage... 00:07:00.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:00.310 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.310 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.310 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.570 --rc genhtml_branch_coverage=1 00:07:00.570 --rc genhtml_function_coverage=1 00:07:00.570 --rc genhtml_legend=1 00:07:00.570 --rc geninfo_all_blocks=1 00:07:00.570 --rc geninfo_unexecuted_blocks=1 00:07:00.570 00:07:00.570 ' 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.570 --rc genhtml_branch_coverage=1 00:07:00.570 --rc genhtml_function_coverage=1 00:07:00.570 --rc genhtml_legend=1 00:07:00.570 --rc geninfo_all_blocks=1 00:07:00.570 --rc geninfo_unexecuted_blocks=1 00:07:00.570 00:07:00.570 ' 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.570 --rc genhtml_branch_coverage=1 00:07:00.570 --rc genhtml_function_coverage=1 00:07:00.570 --rc genhtml_legend=1 00:07:00.570 --rc geninfo_all_blocks=1 00:07:00.570 --rc geninfo_unexecuted_blocks=1 00:07:00.570 00:07:00.570 ' 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.570 --rc genhtml_branch_coverage=1 00:07:00.570 --rc genhtml_function_coverage=1 00:07:00.570 --rc genhtml_legend=1 00:07:00.570 --rc geninfo_all_blocks=1 00:07:00.570 --rc geninfo_unexecuted_blocks=1 00:07:00.570 00:07:00.570 ' 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.570 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.571 
************************************ 00:07:00.571 START TEST nvmf_abort 00:07:00.571 ************************************ 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:00.571 * Looking for test storage... 00:07:00.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.571 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.831 --rc genhtml_branch_coverage=1 00:07:00.831 --rc genhtml_function_coverage=1 00:07:00.831 --rc genhtml_legend=1 00:07:00.831 --rc geninfo_all_blocks=1 00:07:00.831 --rc geninfo_unexecuted_blocks=1 00:07:00.831 00:07:00.831 ' 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.831 --rc genhtml_branch_coverage=1 00:07:00.831 --rc genhtml_function_coverage=1 00:07:00.831 --rc genhtml_legend=1 00:07:00.831 --rc geninfo_all_blocks=1 00:07:00.831 --rc geninfo_unexecuted_blocks=1 00:07:00.831 00:07:00.831 ' 00:07:00.831 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.831 --rc genhtml_branch_coverage=1 00:07:00.831 --rc genhtml_function_coverage=1 00:07:00.831 --rc genhtml_legend=1 00:07:00.831 --rc geninfo_all_blocks=1 00:07:00.831 --rc geninfo_unexecuted_blocks=1 00:07:00.831 00:07:00.832 ' 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.832 --rc genhtml_branch_coverage=1 00:07:00.832 --rc genhtml_function_coverage=1 00:07:00.832 --rc genhtml_legend=1 00:07:00.832 --rc geninfo_all_blocks=1 00:07:00.832 --rc geninfo_unexecuted_blocks=1 00:07:00.832 00:07:00.832 ' 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
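
The nvmftestinit trace that follows builds the NVMe/TCP test network: it detects the two e810 ports (device 0x8086:0x159b), moves one of them into a private network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420 in iptables, and checks reachability with ping in both directions. A rough manual equivalent, assuming the cvl_0_0/cvl_0_1 interface names seen in this run, would look like:

  # cvl_0_0 becomes the target-side port; cvl_0_1 stays in the default namespace as the initiator side
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the real run tags this rule with an SPDK_NVMF comment so the teardown can strip it via iptables-save/restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator
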
00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.832 07:16:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.122 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.123 07:16:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:06.123 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:06.123 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.123 07:16:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:06.123 Found net devices under 0000:86:00.0: cvl_0_0 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:06.123 Found net devices under 0000:86:00.1: cvl_0_1 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.123 07:16:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.123 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:07:06.383 00:07:06.383 --- 10.0.0.2 ping statistics --- 00:07:06.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.383 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:06.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:07:06.383 00:07:06.383 --- 10.0.0.1 ping statistics --- 00:07:06.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.383 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.383 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=563044 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 563044 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 563044 ']' 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.384 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.384 [2024-11-26 07:16:34.407573] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:07:06.384 [2024-11-26 07:16:34.407618] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.384 [2024-11-26 07:16:34.473952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.644 [2024-11-26 07:16:34.516329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.644 [2024-11-26 07:16:34.516366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.644 [2024-11-26 07:16:34.516373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.644 [2024-11-26 07:16:34.516379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.644 [2024-11-26 07:16:34.516384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.644 [2024-11-26 07:16:34.517842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.644 [2024-11-26 07:16:34.517932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.644 [2024-11-26 07:16:34.517934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 [2024-11-26 07:16:34.654290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 Malloc0 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 Delay0 
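
At this point the target application is running inside the namespace and listening on /var/tmp/spdk.sock, and the abort test configures it over RPC before driving it with the bundled abort example. The rpc_cmd calls traced here map onto the stock rpc.py client roughly as sketched below; routing the commands through an RPC shell variable and rpc.py (rather than the test's rpc_cmd wrapper) is only for illustration:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  # target launched by nvmfappstart inside the target namespace:
  #   ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256    # TCP transport with the options traced above
  $RPC bdev_malloc_create 64 4096 -b Malloc0             # 64 MB RAM bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000       # large artificial latency so I/O stays in flight long enough to be aborted
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator side: submit I/O at queue depth 128 for 1 second and issue aborts against it
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
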
00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 [2024-11-26 07:16:34.730033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.644 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.645 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.645 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.645 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.903 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.903 07:16:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:06.903 [2024-11-26 07:16:34.888098] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:09.439 Initializing NVMe Controllers 00:07:09.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:09.439 controller IO queue size 128 less than required 00:07:09.439 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:09.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:09.439 Initialization complete. Launching workers. 
00:07:09.439 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36689 00:07:09.439 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36750, failed to submit 62 00:07:09.439 success 36693, unsuccessful 57, failed 0 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.439 rmmod nvme_tcp 00:07:09.439 rmmod nvme_fabrics 00:07:09.439 rmmod nvme_keyring 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:09.439 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 563044 ']' 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 563044 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 563044 ']' 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 563044 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 563044 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 563044' 00:07:09.440 killing process with pid 563044 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 563044 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 563044 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.440 07:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.351 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.351 00:07:11.351 real 0m10.887s 00:07:11.351 user 0m11.858s 00:07:11.351 sys 0m5.153s 00:07:11.351 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.351 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:11.351 ************************************ 00:07:11.351 END TEST nvmf_abort 00:07:11.351 ************************************ 00:07:11.611 07:16:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.611 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.611 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.611 07:16:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.611 ************************************ 00:07:11.611 START TEST nvmf_ns_hotplug_stress 00:07:11.611 ************************************ 00:07:11.611 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.611 * Looking for test storage... 
00:07:11.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.611 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:11.611 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:11.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.612 --rc genhtml_branch_coverage=1 00:07:11.612 --rc genhtml_function_coverage=1 00:07:11.612 --rc genhtml_legend=1 00:07:11.612 --rc geninfo_all_blocks=1 00:07:11.612 --rc geninfo_unexecuted_blocks=1 00:07:11.612 00:07:11.612 ' 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:11.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.612 --rc genhtml_branch_coverage=1 00:07:11.612 --rc genhtml_function_coverage=1 00:07:11.612 --rc genhtml_legend=1 00:07:11.612 --rc geninfo_all_blocks=1 00:07:11.612 --rc geninfo_unexecuted_blocks=1 00:07:11.612 00:07:11.612 ' 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.612 --rc genhtml_branch_coverage=1 00:07:11.612 --rc genhtml_function_coverage=1 00:07:11.612 --rc genhtml_legend=1 00:07:11.612 --rc geninfo_all_blocks=1 00:07:11.612 --rc geninfo_unexecuted_blocks=1 00:07:11.612 00:07:11.612 ' 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.612 --rc genhtml_branch_coverage=1 00:07:11.612 --rc genhtml_function_coverage=1 00:07:11.612 --rc genhtml_legend=1 00:07:11.612 --rc geninfo_all_blocks=1 00:07:11.612 --rc geninfo_unexecuted_blocks=1 00:07:11.612 00:07:11.612 ' 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.612 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.613 07:16:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.890 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:16.891 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.891 
07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:16.891 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:16.891 Found net devices under 0000:86:00.0: cvl_0_0 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:16.891 Found net devices under 0000:86:00.1: cvl_0_1 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:16.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:07:16.891 00:07:16.891 --- 10.0.0.2 ping statistics --- 00:07:16.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.891 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:07:16.891 00:07:16.891 --- 10.0.0.1 ping statistics --- 00:07:16.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.891 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.891 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=567048 00:07:16.892 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:16.892 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 567048 00:07:16.892 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
567048 ']' 00:07:16.892 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.892 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.892 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.892 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.892 07:16:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:16.892 [2024-11-26 07:16:44.869966] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:07:16.892 [2024-11-26 07:16:44.870014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.892 [2024-11-26 07:16:44.937988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.892 [2024-11-26 07:16:44.978282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.892 [2024-11-26 07:16:44.978318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.892 [2024-11-26 07:16:44.978326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.892 [2024-11-26 07:16:44.978332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.892 [2024-11-26 07:16:44.978337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
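For reference, the target-side bring-up that nvmf/common.sh performs in the trace above reduces to roughly the following sequence. The interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses, and the 0xE core mask are taken from this run; this is a sketch of the traced commands, not the script itself:

ip netns add cvl_0_0_ns_spdk                                   # target port gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # pid recorded as nvmfpid

Both pings succeed in the run above (0.356 ms and 0.146 ms), after which nvmf_tgt is started inside the namespace on cores 1-3.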
00:07:16.892 [2024-11-26 07:16:44.979780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.892 [2024-11-26 07:16:44.979865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.892 [2024-11-26 07:16:44.979867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.152 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.152 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:17.152 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.152 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.152 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:17.152 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.152 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:17.152 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.410 [2024-11-26 07:16:45.292376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.410 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:17.668 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.668 [2024-11-26 07:16:45.705924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.668 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.928 07:16:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:18.187 Malloc0 00:07:18.187 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:18.446 Delay0 00:07:18.446 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.446 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:18.704 NULL1 00:07:18.704 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:18.963 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=567407 00:07:18.963 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:18.963 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:18.963 07:16:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.340 Read completed with error (sct=0, sc=11) 00:07:20.340 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.340 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:20.340 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:20.599 true 00:07:20.599 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:20.599 07:16:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.535 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.535 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:21.535 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:21.794 true 00:07:21.794 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:21.794 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.052 07:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.310 
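Condensed, the RPC sequence traced above (ns_hotplug_stress.sh lines 27-42) provisions the subsystem that the stress loop then exercises. Arguments are copied from this run; $RPC is shorthand for the full .../spdk/scripts/rpc.py path shown in the log:

RPC=./scripts/rpc.py                                           # shorthand for the path in the trace
$RPC nvmf_create_transport -t tcp -o -u 8192                   # transport options as traced
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0                      # Malloc0: size/block size as traced
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes namespace 1
$RPC bdev_null_create NULL1 1000 512
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes namespace 2, resized by the loop
./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &                  # PERF_PID (567407 in this run)

With the perf job running against both namespaces, the trace that follows is the hot-plug loop incrementing null_size from 1001 upward.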
07:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:22.310 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:22.310 true 00:07:22.310 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:22.310 07:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.687 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.687 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:23.687 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:23.687 true 00:07:23.687 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:23.687 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.946 07:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.205 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:24.205 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:24.465 true 00:07:24.465 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:24.465 07:16:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.402 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.662 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:25.662 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:25.921 true 00:07:25.921 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:25.921 07:16:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.857 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.858 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:26.858 07:16:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:27.116 true 00:07:27.116 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:27.116 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.376 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.376 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:27.376 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:27.636 true 00:07:27.636 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:27.636 07:16:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.649 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.950 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:28.950 07:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:28.950 true 00:07:28.950 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:28.950 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.289 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.572 07:16:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:29.572 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:29.572 true 00:07:29.572 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:29.572 07:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.838 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.838 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:30.838 07:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:31.120 true 00:07:31.120 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:31.120 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.116 07:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.116 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:32.116 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:32.421 true 00:07:32.421 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:32.421 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.421 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.718 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:32.718 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:32.989 true 00:07:32.989 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:32.989 07:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.987 07:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.259 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:34.259 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:34.259 true 00:07:34.259 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:34.259 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.518 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.778 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:34.778 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:35.037 true 00:07:35.037 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:35.037 07:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.976 07:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.236 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:36.236 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:36.236 true 00:07:36.236 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:36.236 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.495 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.755 07:17:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:36.755 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:37.014 true 00:07:37.014 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:37.014 07:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.952 07:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:37.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.212 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:38.212 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:38.471 true 00:07:38.471 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:38.471 07:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.409 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.409 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:39.409 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:39.668 true 00:07:39.668 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:39.668 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.927 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.927 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1020 00:07:39.927 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:40.187 true 00:07:40.187 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:40.187 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.565 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.565 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:41.565 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:41.823 true 00:07:41.823 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:41.823 07:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.760 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.760 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:42.761 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:43.020 true 00:07:43.020 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:43.020 07:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.280 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.280 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:43.280 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:43.539 true 00:07:43.539 07:17:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:43.539 07:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.477 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.736 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:44.736 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:44.995 true 00:07:44.995 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:44.995 07:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.255 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.514 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:45.514 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:45.514 true 00:07:45.514 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:45.514 07:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.894 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.894 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:46.894 07:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:47.154 true 00:07:47.154 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:47.154 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.092 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.092 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:48.092 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:48.352 true 00:07:48.352 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:48.352 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.612 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.871 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:48.871 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:48.871 true 00:07:48.871 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:48.871 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.250 07:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.250 Initializing NVMe Controllers 00:07:50.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:50.250 Controller IO queue size 128, less than required. 00:07:50.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.250 Controller IO queue size 128, less than required. 00:07:50.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:50.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:50.250 Initialization complete. Launching workers. 
00:07:50.250 ======================================================== 00:07:50.250 Latency(us) 00:07:50.250 Device Information : IOPS MiB/s Average min max 00:07:50.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1394.80 0.68 61406.36 2937.58 1028615.93 00:07:50.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16594.03 8.10 7713.41 1563.66 308369.22 00:07:50.250 ======================================================== 00:07:50.250 Total : 17988.83 8.78 11876.60 1563.66 1028615.93 00:07:50.250 00:07:50.250 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:50.250 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:50.250 true 00:07:50.250 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567407 00:07:50.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (567407) - No such process 00:07:50.250 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 567407 00:07:50.250 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.509 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.769 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:50.769 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:50.769 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:50.769 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.769 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:51.030 null0 00:07:51.030 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.030 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.030 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:51.030 null1 00:07:51.290 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.290 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.290 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:51.290 null2 00:07:51.290 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.290 
07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.290 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:51.549 null3 00:07:51.549 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.549 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.549 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:51.808 null4 00:07:51.808 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:51.808 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:51.808 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:52.068 null5 00:07:52.068 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.068 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.068 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:52.068 null6 00:07:52.068 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.068 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.068 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:52.328 null7 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
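The per-namespace I/O summary printed above is internally consistent; a quick cross-check of the printed numbers (plain arithmetic, bc used only for illustration):
echo "scale=1; 0.68*1024*1024/1394.80" | bc                          # ≈ 511  -> roughly 512-byte I/Os on NSID 1
echo "scale=2; 17988.83*512/1024/1024" | bc                          # ≈ 8.78 -> matches the Total MiB/s line, assuming the ~512 B I/O size implied above
echo "scale=2; (1394.80*61406.36+16594.03*7713.41)/17988.83" | bc    # ≈ 11876.60 -> matches the Total average latency (us)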
00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.328 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 573179 573180 573182 573184 573186 573188 573190 573192 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.329 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.589 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.589 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.589 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.589 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
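From @58 onward the test switches to its multi-threaded phase: it creates eight null bdevs (null0..null7, 100 MB each with a 4096-byte block size, per the bdev_null_create arguments in the trace) and launches one backgrounded add_remove worker per namespace; the "wait 573179 573180 ..." entry above is the script waiting on those eight worker PIDs. A sketch of that driver, reconstructed from the @58-@66 markers (loop variables are illustrative; add_remove itself is sketched further below):
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    $rpc_py bdev_null_create "null$i" 100 4096        # 100 MB null bdev, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &                  # one hotplug worker per nsid/bdev pair
    pids+=($!)
done
wait "${pids[@]}"                                     # let all eight workers finish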
00:07:52.589 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.589 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.589 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.589 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.848 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.848 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.848 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.848 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.848 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.848 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.849 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.108 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.108 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.108 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.108 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.108 07:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.108 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.108 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.108 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.108 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.108 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.108 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.108 07:17:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.108 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
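Each worker repeats the @16-@18 pattern seen throughout this trace ten times: attach its bdev as a fixed namespace ID on cnode1, then detach it again, so eight namespaces churn concurrently. A sketch of that add_remove helper, reconstructed from the trace rather than taken verbatim from the script:
add_remove() {
    local nsid=$1 bdev=$2
    local i
    for ((i = 0; i < 10; i++)); do
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # hot-add the namespace
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # hot-remove it again
    done
}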
00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.368 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.627 07:17:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.627 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.886 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.886 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.886 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.887 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.887 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.887 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.887 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.887 07:17:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.146 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.405 07:17:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.405 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.406 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.406 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.406 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.406 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.406 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.406 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.664 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.664 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.664 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.664 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.664 07:17:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.664 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.664 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.664 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.922 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.922 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.922 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.922 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.922 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.922 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.922 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.923 07:17:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.923 07:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.182 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.441 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.441 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.442 07:17:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.442 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:55.702 07:17:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.702 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:55.961 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.961 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.961 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.961 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.961 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.961 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.961 07:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.961 07:17:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:56.221 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.222 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.222 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
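
Every entry in this stretch of the trace is generated by just three lines of ns_hotplug_stress.sh (the @16 loop header and the @17/@18 rpc.py calls): the script repeatedly attaches null bdevs null0..null7 as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1 in a different order on each pass, then detaches them again, all while the subsystem stays live. A minimal sketch of that loop as it can be reconstructed from the xtrace alone; the shuffle helper and the exact loop body are assumptions, only the rpc.py invocations and the (( i < 10 )) bound are taken from the log:

  # assumed reconstruction of ns_hotplug_stress.sh@16-@18, not the script's actual source
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for (( i = 0; i < 10; ++i )); do
      for n in $(shuf -e {1..8}); do   # add order varies per pass, as the trace shows
          "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
      done
      for n in $(shuf -e {1..8}); do   # removal order varies as well
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
      done
  done
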
00:07:56.222 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.222 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.222 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.481 07:17:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.481 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.741 rmmod nvme_tcp 00:07:56.741 rmmod nvme_fabrics 00:07:56.741 rmmod nvme_keyring 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 567048 ']' 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 567048 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 567048 ']' 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 567048 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 567048 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 567048' 00:07:56.741 killing process with pid 567048 00:07:56.741 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 567048 00:07:56.741 07:17:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 567048 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.000 07:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.909 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.909 00:07:58.909 real 0m47.419s 00:07:58.909 user 3m15.547s 00:07:58.909 sys 0m14.724s 00:07:58.909 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.909 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:58.909 ************************************ 00:07:58.909 END TEST nvmf_ns_hotplug_stress 00:07:58.909 ************************************ 00:07:58.909 07:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:58.909 07:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.909 07:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.909 07:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.909 ************************************ 00:07:58.909 START TEST nvmf_delete_subsystem 00:07:58.909 ************************************ 00:07:58.909 07:17:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:59.170 * Looking for test storage... 
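
The harness moves on to the next stage the same way it started this one: nvmf_target_core.sh wraps each target script in run_test, which prints the START/END banners and the real/user/sys summary seen above and forwards --transport=tcp. To rerun only this stage against the same checkout, something close to the following should work (paths are copied from the trace; sourcing autotest_common.sh to pick up run_test is an assumption about the harness, not something shown in the log):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  source test/common/autotest_common.sh    # assumed location of run_test and the xtrace helpers
  run_test "nvmf_delete_subsystem" test/nvmf/target/delete_subsystem.sh --transport=tcp
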
00:07:59.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:59.170 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.171 --rc genhtml_branch_coverage=1 00:07:59.171 --rc genhtml_function_coverage=1 00:07:59.171 --rc genhtml_legend=1 00:07:59.171 --rc geninfo_all_blocks=1 00:07:59.171 --rc geninfo_unexecuted_blocks=1 00:07:59.171 00:07:59.171 ' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.171 --rc genhtml_branch_coverage=1 00:07:59.171 --rc genhtml_function_coverage=1 00:07:59.171 --rc genhtml_legend=1 00:07:59.171 --rc geninfo_all_blocks=1 00:07:59.171 --rc geninfo_unexecuted_blocks=1 00:07:59.171 00:07:59.171 ' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.171 --rc genhtml_branch_coverage=1 00:07:59.171 --rc genhtml_function_coverage=1 00:07:59.171 --rc genhtml_legend=1 00:07:59.171 --rc geninfo_all_blocks=1 00:07:59.171 --rc geninfo_unexecuted_blocks=1 00:07:59.171 00:07:59.171 ' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.171 --rc genhtml_branch_coverage=1 00:07:59.171 --rc genhtml_function_coverage=1 00:07:59.171 --rc genhtml_legend=1 00:07:59.171 --rc geninfo_all_blocks=1 00:07:59.171 --rc geninfo_unexecuted_blocks=1 00:07:59.171 00:07:59.171 ' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.171 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.172 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.172 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.172 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.172 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.172 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.172 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.172 07:17:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:04.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.447 
07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:04.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:04.447 Found net devices under 0000:86:00.0: cvl_0_0 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:04.447 Found net devices under 0000:86:00.1: cvl_0_1 
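
The two "Found net devices under ..." lines are the result of the sysfs walk visible in the trace: for each whitelisted e810 function, common.sh globs /sys/bus/pci/devices/$pci/net/*, filters on the "up" check at common.sh@418, and keeps the interface name, which is how 0000:86:00.0 and 0000:86:00.1 end up as cvl_0_0 and cvl_0_1. The same mapping can be read back directly on the test node with a one-liner like this (a convenience sketch using the addresses reported above, not part of the test scripts):

  for bdf in 0000:86:00.0 0000:86:00.1; do
      echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net/)"
  done
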
00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.447 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:04.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:08:04.448 00:08:04.448 --- 10.0.0.2 ping statistics --- 00:08:04.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.448 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:04.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:08:04.448 00:08:04.448 --- 10.0.0.1 ping statistics --- 00:08:04.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.448 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=577558 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 577558 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 577558 ']' 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.448 07:17:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.448 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.448 [2024-11-26 07:17:32.474527] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:08:04.448 [2024-11-26 07:17:32.474572] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.448 [2024-11-26 07:17:32.541473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.708 [2024-11-26 07:17:32.585078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.708 [2024-11-26 07:17:32.585114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.708 [2024-11-26 07:17:32.585121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.708 [2024-11-26 07:17:32.585127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.708 [2024-11-26 07:17:32.585132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.708 [2024-11-26 07:17:32.589963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.708 [2024-11-26 07:17:32.589967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.708 [2024-11-26 07:17:32.730703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:04.708 07:17:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.708 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.709 [2024-11-26 07:17:32.746898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.709 NULL1 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.709 Delay0 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=577590 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:04.709 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:04.968 [2024-11-26 07:17:32.831585] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
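
By this point delete_subsystem.sh has built its whole target over RPC and launched a 5-second spdk_nvme_perf run against it; the nvmf_delete_subsystem call that opens the next stretch of the log then tears the subsystem down underneath that traffic, which is what produces the wall of "completed with error" completions that follows. The sequence, condensed from the rpc_cmd calls in the trace (arguments copied verbatim; only the backgrounding and variable plumbing are paraphrased):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512          # 1000 MB backing size, 512-byte blocks
  "$rpc" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # keep queue-depth-128 random I/O in flight while the subsystem is deleted
  "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
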
00:08:06.876 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.876 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.876 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 [2024-11-26 07:17:34.912645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f13d400d020 is same with the state(6) to be set 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed 
with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 starting I/O failed: -6 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 [2024-11-26 07:17:34.913136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053680 is same with the state(6) to be set 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed 
with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.876 Write completed with error (sct=0, sc=8) 00:08:06.876 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 [2024-11-26 07:17:34.913342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f13d400d680 is same with the state(6) to be set 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 
00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 Read completed with error (sct=0, sc=8) 00:08:06.877 Write completed with error (sct=0, sc=8) 00:08:06.877 [2024-11-26 07:17:34.913520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f13d4000c40 is same with the state(6) to be set 00:08:07.813 [2024-11-26 07:17:35.885117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10549a0 is same with the state(6) to be set 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 [2024-11-26 07:17:35.914644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f13d400d350 is same with the state(6) to be set 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, 
sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 [2024-11-26 07:17:35.915424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10534a0 is same with the state(6) to be set 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Read completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.073 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 [2024-11-26 07:17:35.915587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053860 is same with the state(6) to be set 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 
Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Write completed with error (sct=0, sc=8) 00:08:08.074 Read completed with error (sct=0, sc=8) 00:08:08.074 [2024-11-26 07:17:35.916173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10532c0 is same with the state(6) to be set 00:08:08.074 Initializing NVMe Controllers 00:08:08.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:08.074 Controller IO queue size 128, less than required. 00:08:08.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:08.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:08.074 Initialization complete. Launching workers. 
00:08:08.074 ======================================================== 00:08:08.074 Latency(us) 00:08:08.074 Device Information : IOPS MiB/s Average min max 00:08:08.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.56 0.10 943622.12 777.03 1011194.83 00:08:08.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.91 0.07 908638.27 721.87 1011895.50 00:08:08.074 ======================================================== 00:08:08.074 Total : 343.46 0.17 928556.82 721.87 1011895.50 00:08:08.074 00:08:08.074 [2024-11-26 07:17:35.916741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10549a0 (9): Bad file descriptor 00:08:08.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:08.074 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.074 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:08.074 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 577590 00:08:08.074 07:17:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 577590 00:08:08.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (577590) - No such process 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 577590 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 577590 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 577590 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.333 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.592 07:17:36 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.592 [2024-11-26 07:17:36.442177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=578277 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 578277 00:08:08.592 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:08.592 [2024-11-26 07:17:36.515911] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
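The trace above is the core of the delete_subsystem test: spdk_nvme_perf is launched in the background against nqn.2016-06.io.spdk:cnode1 (here as pid 578277, after the earlier run as pid 577590) while the subsystem is deleted underneath it, so the long runs of 'Read/Write completed with error (sct=0, sc=8)' are what the test deliberately provokes, in-flight commands aborted as the queues are torn down, rather than a failure of the harness itself. A minimal sketch of the launch-and-poll pattern traced above, with the perf flags copied verbatim from the log and the surrounding variable names chosen here only for illustration:

  # Start the I/O generator in the background, then poll it with kill -0,
  # mirroring the delay / sleep 0.5 loop traced in delete_subsystem.sh.
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  "$PERF" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # perf process still alive?
      (( delay++ > 20 )) && break             # give up after ~10 s, as the script's counter does
      sleep 0.5
  done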
00:08:09.161 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.161 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 578277 00:08:09.161 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.420 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.420 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 578277 00:08:09.420 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:09.988 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:09.988 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 578277 00:08:09.988 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:10.557 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:10.557 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 578277 00:08:10.557 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.126 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.126 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 578277 00:08:11.126 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.692 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.692 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 578277 00:08:11.692 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:11.692 Initializing NVMe Controllers 00:08:11.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.692 Controller IO queue size 128, less than required. 00:08:11.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:11.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:11.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:11.692 Initialization complete. Launching workers. 
00:08:11.692 ======================================================== 00:08:11.692 Latency(us) 00:08:11.692 Device Information : IOPS MiB/s Average min max 00:08:11.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003304.96 1000144.30 1011701.28 00:08:11.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005458.51 1000194.31 1043207.44 00:08:11.693 ======================================================== 00:08:11.693 Total : 256.00 0.12 1004381.74 1000144.30 1043207.44 00:08:11.693 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 578277 00:08:11.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (578277) - No such process 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 578277 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.951 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.951 rmmod nvme_tcp 00:08:11.951 rmmod nvme_fabrics 00:08:11.951 rmmod nvme_keyring 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 577558 ']' 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 577558 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 577558 ']' 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 577558 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 577558 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 577558' 00:08:12.211 killing process with pid 577558 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 577558 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 577558 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.211 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:14.750 00:08:14.750 real 0m15.356s 00:08:14.750 user 0m28.673s 00:08:14.750 sys 0m5.056s 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:14.750 ************************************ 00:08:14.750 END TEST nvmf_delete_subsystem 00:08:14.750 ************************************ 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.750 ************************************ 00:08:14.750 START TEST nvmf_host_management 00:08:14.750 ************************************ 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:14.750 * Looking for test storage... 
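Just above, before nvmf_host_management starts, nvmf_delete_subsystem cleans up through nvmftestfini: the nvmf_tgt application (nvmfpid 577558) is killed and waited for, the kernel nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the firewall rules tagged SPDK_NVMF are removed by re-importing a filtered iptables dump, and the SPDK-created network namespace and leftover addresses are flushed. A condensed sketch of that teardown as it appears in the trace (the explicit ip netns delete line is an assumption about what the _remove_spdk_ns helper boils down to; the other commands are copied from the traced output):

  # Teardown sketch, mirroring nvmftestfini as traced above.
  kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess: stop the target app
  modprobe -v -r nvme-tcp                                 # cascades to nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the SPDK-tagged ACCEPT rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                # clear the initiator-side address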
00:08:14.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.750 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.751 --rc genhtml_branch_coverage=1 00:08:14.751 --rc genhtml_function_coverage=1 00:08:14.751 --rc genhtml_legend=1 00:08:14.751 --rc geninfo_all_blocks=1 00:08:14.751 --rc geninfo_unexecuted_blocks=1 00:08:14.751 00:08:14.751 ' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.751 --rc genhtml_branch_coverage=1 00:08:14.751 --rc genhtml_function_coverage=1 00:08:14.751 --rc genhtml_legend=1 00:08:14.751 --rc geninfo_all_blocks=1 00:08:14.751 --rc geninfo_unexecuted_blocks=1 00:08:14.751 00:08:14.751 ' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.751 --rc genhtml_branch_coverage=1 00:08:14.751 --rc genhtml_function_coverage=1 00:08:14.751 --rc genhtml_legend=1 00:08:14.751 --rc geninfo_all_blocks=1 00:08:14.751 --rc geninfo_unexecuted_blocks=1 00:08:14.751 00:08:14.751 ' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.751 --rc genhtml_branch_coverage=1 00:08:14.751 --rc genhtml_function_coverage=1 00:08:14.751 --rc genhtml_legend=1 00:08:14.751 --rc geninfo_all_blocks=1 00:08:14.751 --rc geninfo_unexecuted_blocks=1 00:08:14.751 00:08:14.751 ' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:14.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.751 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:20.032 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:20.032 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:20.032 Found net devices under 0000:86:00.0: cvl_0_0 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.032 07:17:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:20.032 Found net devices under 0000:86:00.1: cvl_0_1 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.032 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:20.033 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:20.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:08:20.033 00:08:20.033 --- 10.0.0.2 ping statistics --- 00:08:20.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.033 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:08:20.033 00:08:20.033 --- 10.0.0.1 ping statistics --- 00:08:20.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.033 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=582291 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 582291 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:20.033 07:17:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 582291 ']' 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.033 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.292 [2024-11-26 07:17:48.145549] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:08:20.292 [2024-11-26 07:17:48.145598] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.292 [2024-11-26 07:17:48.213044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.292 [2024-11-26 07:17:48.256826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.292 [2024-11-26 07:17:48.256865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.292 [2024-11-26 07:17:48.256872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.292 [2024-11-26 07:17:48.256878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.292 [2024-11-26 07:17:48.256883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
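[annotation] nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -e 0xFFFF (the tracepoint group mask the notice above reports) and -m 0x1E; 0x1E is binary 11110, so the four reactors reported just below land on cores 1-4. The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from waitforlisten, which in effect polls the app's RPC socket until it answers. A rough approximation of that wait (the retry count mirrors max_retries=100 from the trace; the poll interval and the use of rpc_get_methods are assumptions, not the helper's exact code):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for ((i = 100; i > 0; i--)); do
      # any RPC that succeeds proves the target is up and listening on the socket
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5   # poll interval is an assumption
  done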
00:08:20.292 [2024-11-26 07:17:48.258382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.292 [2024-11-26 07:17:48.258467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.292 [2024-11-26 07:17:48.258596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.292 [2024-11-26 07:17:48.258597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:20.292 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.292 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:20.292 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:20.292 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.292 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.553 [2024-11-26 07:17:48.395639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.553 Malloc0 00:08:20.553 [2024-11-26 07:17:48.466395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=582354 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 582354 /var/tmp/bdevperf.sock 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 582354 ']' 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:20.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:20.553 { 00:08:20.553 "params": { 00:08:20.553 "name": "Nvme$subsystem", 00:08:20.553 "trtype": "$TEST_TRANSPORT", 00:08:20.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.553 "adrfam": "ipv4", 00:08:20.553 "trsvcid": "$NVMF_PORT", 00:08:20.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.553 "hdgst": ${hdgst:-false}, 00:08:20.553 "ddgst": ${ddgst:-false} 00:08:20.553 }, 00:08:20.553 "method": "bdev_nvme_attach_controller" 00:08:20.553 } 00:08:20.553 EOF 00:08:20.553 )") 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:20.553 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:20.553 "params": { 00:08:20.553 "name": "Nvme0", 00:08:20.553 "trtype": "tcp", 00:08:20.553 "traddr": "10.0.0.2", 00:08:20.553 "adrfam": "ipv4", 00:08:20.553 "trsvcid": "4420", 00:08:20.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:20.553 "hdgst": false, 00:08:20.553 "ddgst": false 00:08:20.553 }, 00:08:20.553 "method": "bdev_nvme_attach_controller" 00:08:20.553 }' 00:08:20.553 [2024-11-26 07:17:48.562030] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
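[annotation] gen_nvmf_target_json has just rendered the heredoc template into the bdev_nvme_attach_controller entry printed above, pointing bdevperf at 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode0, and bdevperf reads it through --json /dev/fd/63. Only the per-controller entry is echoed in the trace; the helper normally embeds it in the standard SPDK JSON-config wrapper before handing it over, plausibly along these lines (the outer "subsystems"/"bdev"/"config" layout is an assumption reconstructed from the printed fragment, and the helper may append further entries such as bdev_wait_for_examine):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }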
00:08:20.553 [2024-11-26 07:17:48.562078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582354 ] 00:08:20.553 [2024-11-26 07:17:48.627037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.813 [2024-11-26 07:17:48.670054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.813 Running I/O for 10 seconds... 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.813 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.073 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.073 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:08:21.073 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:08:21.073 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:21.333 
07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=661 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 661 -ge 100 ']' 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.333 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.333 [2024-11-26 07:17:49.236824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fa200 is same with the state(6) to be set 00:08:21.333 [2024-11-26 07:17:49.236862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fa200 is same with the state(6) to be set 00:08:21.333 [2024-11-26 07:17:49.239177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.333 [2024-11-26 07:17:49.239213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.333 [2024-11-26 07:17:49.239231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.333 [2024-11-26 07:17:49.239246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:21.333 [2024-11-26 07:17:49.239259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239266] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1412500 is same with the state(6) to be set 00:08:21.333 [2024-11-26 07:17:49.239348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.333 [2024-11-26 07:17:49.239623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.333 [2024-11-26 07:17:49.239629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.239988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.239995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.334 [2024-11-26 07:17:49.240211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.334 [2024-11-26 07:17:49.240218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.335 [2024-11-26 07:17:49.240226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.335 [2024-11-26 07:17:49.240232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.335 [2024-11-26 07:17:49.240240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.335 [2024-11-26 07:17:49.240247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.335 [2024-11-26 07:17:49.240254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.335 [2024-11-26 07:17:49.240261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.335 [2024-11-26 07:17:49.240269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.335 [2024-11-26 07:17:49.240275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.335 [2024-11-26 07:17:49.240284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.335 [2024-11-26 07:17:49.240292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.335 [2024-11-26 07:17:49.240300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.335 [2024-11-26 07:17:49.240306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.335 [2024-11-26 07:17:49.241264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:21.335 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.335 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:21.335 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.335 task offset: 98304 on job bdev=Nvme0n1 fails 00:08:21.335 00:08:21.335 Latency(us) 00:08:21.335 [2024-11-26T06:17:49.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.335 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:21.335 Job: Nvme0n1 ended in about 0.41 seconds with error 00:08:21.335 Verification LBA range: start 0x0 length 0x400 00:08:21.335 Nvme0n1 : 0.41 1887.12 117.94 157.26 0.00 30461.09 1659.77 27696.08 00:08:21.335 [2024-11-26T06:17:49.435Z] =================================================================================================================== 00:08:21.335 [2024-11-26T06:17:49.435Z] Total : 1887.12 117.94 157.26 0.00 30461.09 1659.77 27696.08 00:08:21.335 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:21.335 [2024-11-26 07:17:49.243649] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.335 [2024-11-26 07:17:49.243672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1412500 (9): Bad file descriptor 00:08:21.335 [2024-11-26 07:17:49.248720] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:21.335 [2024-11-26 07:17:49.248856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 
cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:21.335 [2024-11-26 07:17:49.248880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:21.335 [2024-11-26 07:17:49.248895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:21.335 [2024-11-26 07:17:49.248903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:21.335 [2024-11-26 07:17:49.248910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:21.335 [2024-11-26 07:17:49.248916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1412500 00:08:21.335 [2024-11-26 07:17:49.248936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1412500 (9): Bad file descriptor 00:08:21.335 [2024-11-26 07:17:49.248952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:08:21.335 [2024-11-26 07:17:49.248960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:08:21.335 [2024-11-26 07:17:49.248969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:08:21.335 [2024-11-26 07:17:49.248977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:08:21.335 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.335 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 582354 00:08:22.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (582354) - No such process 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:22.273 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:22.273 { 00:08:22.273 "params": { 00:08:22.273 "name": "Nvme$subsystem", 00:08:22.273 "trtype": "$TEST_TRANSPORT", 00:08:22.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.273 "adrfam": "ipv4", 00:08:22.273 "trsvcid": "$NVMF_PORT", 00:08:22.273 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.274 "hdgst": ${hdgst:-false}, 00:08:22.274 "ddgst": ${ddgst:-false} 00:08:22.274 }, 00:08:22.274 "method": "bdev_nvme_attach_controller" 00:08:22.274 } 00:08:22.274 EOF 00:08:22.274 )") 00:08:22.274 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:22.274 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:22.274 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:22.274 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:22.274 "params": { 00:08:22.274 "name": "Nvme0", 00:08:22.274 "trtype": "tcp", 00:08:22.274 "traddr": "10.0.0.2", 00:08:22.274 "adrfam": "ipv4", 00:08:22.274 "trsvcid": "4420", 00:08:22.274 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:22.274 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:22.274 "hdgst": false, 00:08:22.274 "ddgst": false 00:08:22.274 }, 00:08:22.274 "method": "bdev_nvme_attach_controller" 00:08:22.274 }' 00:08:22.274 [2024-11-26 07:17:50.310833] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:08:22.274 [2024-11-26 07:17:50.310882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582802 ] 00:08:22.533 [2024-11-26 07:17:50.374908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.533 [2024-11-26 07:17:50.415536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.792 Running I/O for 1 seconds... 
00:08:23.730 1984.00 IOPS, 124.00 MiB/s 00:08:23.730 Latency(us) 00:08:23.730 [2024-11-26T06:17:51.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.730 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:23.730 Verification LBA range: start 0x0 length 0x400 00:08:23.731 Nvme0n1 : 1.02 2001.98 125.12 0.00 0.00 31465.58 4616.01 27582.11 00:08:23.731 [2024-11-26T06:17:51.831Z] =================================================================================================================== 00:08:23.731 [2024-11-26T06:17:51.831Z] Total : 2001.98 125.12 0.00 0.00 31465.58 4616.01 27582.11 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.991 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.991 rmmod nvme_tcp 00:08:23.991 rmmod nvme_fabrics 00:08:23.991 rmmod nvme_keyring 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 582291 ']' 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 582291 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 582291 ']' 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 582291 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 582291 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.991 07:17:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 582291' 00:08:23.991 killing process with pid 582291 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 582291 00:08:23.991 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 582291 00:08:24.250 [2024-11-26 07:17:52.218862] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.251 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:26.791 00:08:26.791 real 0m11.912s 00:08:26.791 user 0m19.694s 00:08:26.791 sys 0m5.233s 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.791 ************************************ 00:08:26.791 END TEST nvmf_host_management 00:08:26.791 ************************************ 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.791 ************************************ 00:08:26.791 START TEST nvmf_lvol 00:08:26.791 ************************************ 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:26.791 * Looking for test storage... 00:08:26.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.791 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.792 --rc genhtml_branch_coverage=1 00:08:26.792 --rc genhtml_function_coverage=1 00:08:26.792 --rc genhtml_legend=1 00:08:26.792 --rc geninfo_all_blocks=1 00:08:26.792 --rc geninfo_unexecuted_blocks=1 00:08:26.792 00:08:26.792 ' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.792 --rc genhtml_branch_coverage=1 00:08:26.792 --rc genhtml_function_coverage=1 00:08:26.792 --rc genhtml_legend=1 00:08:26.792 --rc geninfo_all_blocks=1 00:08:26.792 --rc geninfo_unexecuted_blocks=1 00:08:26.792 00:08:26.792 ' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.792 --rc genhtml_branch_coverage=1 00:08:26.792 --rc genhtml_function_coverage=1 00:08:26.792 --rc genhtml_legend=1 00:08:26.792 --rc geninfo_all_blocks=1 00:08:26.792 --rc geninfo_unexecuted_blocks=1 00:08:26.792 00:08:26.792 ' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.792 --rc genhtml_branch_coverage=1 00:08:26.792 --rc genhtml_function_coverage=1 00:08:26.792 --rc genhtml_legend=1 00:08:26.792 --rc geninfo_all_blocks=1 00:08:26.792 --rc geninfo_unexecuted_blocks=1 00:08:26.792 00:08:26.792 ' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
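(Editor's note: the xtrace entries above step through the lcov version gate in spdk/scripts/common.sh — `lt 1.15 2` calls `cmp_versions`, which splits each version string on `.`/`-`/`:` and compares the components numerically. The sketch below is a simplified, hypothetical re-implementation of that check for illustration only; the function name `version_lt` is an assumption and the real helpers live in scripts/common.sh.)

```bash
#!/usr/bin/env bash
# Minimal sketch of the version comparison traced above (lt -> cmp_versions).
# version_lt is a hypothetical name, not SPDK's actual helper.
version_lt() {
    local -a ver1 ver2
    local v len
    IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$2"   # "2"    -> (2)
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing or non-numeric components count as 0
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

# Mirrors the "lt 1.15 2" check in the log: lcov 1.15 is older than 2,
# so the test falls back to the plain --rc lcov_* coverage options.
version_lt 1.15 2 && echo "1.15 < 2"
```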
00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.792 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.793 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:32.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:32.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.070 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.070 07:17:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:32.071 Found net devices under 0000:86:00.0: cvl_0_0 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:32.071 Found net devices under 0000:86:00.1: cvl_0_1 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.071 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:08:32.071 00:08:32.071 --- 10.0.0.2 ping statistics --- 00:08:32.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.071 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:08:32.071 00:08:32.071 --- 10.0.0.1 ping statistics --- 00:08:32.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.071 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.071 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=586588 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 586588 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 586588 ']' 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.331 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.331 [2024-11-26 07:18:00.228693] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:08:32.331 [2024-11-26 07:18:00.228740] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.331 [2024-11-26 07:18:00.294911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.331 [2024-11-26 07:18:00.336934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.331 [2024-11-26 07:18:00.336977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.331 [2024-11-26 07:18:00.336983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.331 [2024-11-26 07:18:00.336989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.331 [2024-11-26 07:18:00.336994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.331 [2024-11-26 07:18:00.338375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.331 [2024-11-26 07:18:00.338473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.331 [2024-11-26 07:18:00.338474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.591 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.591 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:32.591 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.591 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.591 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:32.591 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.591 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.591 [2024-11-26 07:18:00.647425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.591 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:32.851 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:32.851 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:33.110 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:33.110 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:33.368 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:33.628 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3988964a-ea46-46ea-bd98-035b78baa7fc 00:08:33.628 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3988964a-ea46-46ea-bd98-035b78baa7fc lvol 20 00:08:33.628 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7e0c89fc-db06-4e75-ad58-109dac935854 00:08:33.628 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.887 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e0c89fc-db06-4e75-ad58-109dac935854 00:08:34.147 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:34.407 [2024-11-26 07:18:02.292929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.407 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:34.666 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:34.666 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=587089 00:08:34.666 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:35.606 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7e0c89fc-db06-4e75-ad58-109dac935854 MY_SNAPSHOT 00:08:35.866 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1ef7334d-05f5-4270-a34c-a5e2cb703909 00:08:35.866 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7e0c89fc-db06-4e75-ad58-109dac935854 30 00:08:36.125 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1ef7334d-05f5-4270-a34c-a5e2cb703909 MY_CLONE 00:08:36.384 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=aa5ef295-b50e-4b58-acde-26d1f546c7b5 00:08:36.384 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate aa5ef295-b50e-4b58-acde-26d1f546c7b5 00:08:36.953 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 587089 00:08:45.078 Initializing NVMe Controllers 00:08:45.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:45.078 Controller IO queue size 128, less than required. 00:08:45.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:45.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:45.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:45.078 Initialization complete. Launching workers. 00:08:45.078 ======================================================== 00:08:45.078 Latency(us) 00:08:45.078 Device Information : IOPS MiB/s Average min max 00:08:45.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12020.40 46.95 10654.21 1868.78 63979.18 00:08:45.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11931.60 46.61 10730.38 3477.89 60186.11 00:08:45.078 ======================================================== 00:08:45.078 Total : 23952.00 93.56 10692.16 1868.78 63979.18 00:08:45.078 00:08:45.078 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.078 07:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e0c89fc-db06-4e75-ad58-109dac935854 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3988964a-ea46-46ea-bd98-035b78baa7fc 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.337 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.337 rmmod nvme_tcp 00:08:45.337 rmmod nvme_fabrics 00:08:45.337 rmmod nvme_keyring 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 586588 ']' 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 586588 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 586588 ']' 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 586588 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 586588 00:08:45.596 07:18:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 586588' 00:08:45.596 killing process with pid 586588 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 586588 00:08:45.596 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 586588 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.855 07:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.764 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.764 00:08:47.764 real 0m21.380s 00:08:47.764 user 1m2.543s 00:08:47.764 sys 0m7.343s 00:08:47.764 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.764 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:47.764 ************************************ 00:08:47.764 END TEST nvmf_lvol 00:08:47.764 ************************************ 00:08:47.764 07:18:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:47.764 07:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.764 07:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.764 07:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.764 ************************************ 00:08:47.764 START TEST nvmf_lvs_grow 00:08:47.764 ************************************ 00:08:47.764 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:48.025 * Looking for test storage... 
00:08:48.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.025 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.025 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.025 07:18:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.025 --rc genhtml_branch_coverage=1 00:08:48.025 --rc genhtml_function_coverage=1 00:08:48.025 --rc genhtml_legend=1 00:08:48.025 --rc geninfo_all_blocks=1 00:08:48.025 --rc geninfo_unexecuted_blocks=1 00:08:48.025 00:08:48.025 ' 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.025 --rc genhtml_branch_coverage=1 00:08:48.025 --rc genhtml_function_coverage=1 00:08:48.025 --rc genhtml_legend=1 00:08:48.025 --rc geninfo_all_blocks=1 00:08:48.025 --rc geninfo_unexecuted_blocks=1 00:08:48.025 00:08:48.025 ' 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.025 --rc genhtml_branch_coverage=1 00:08:48.025 --rc genhtml_function_coverage=1 00:08:48.025 --rc genhtml_legend=1 00:08:48.025 --rc geninfo_all_blocks=1 00:08:48.025 --rc geninfo_unexecuted_blocks=1 00:08:48.025 00:08:48.025 ' 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.025 --rc genhtml_branch_coverage=1 00:08:48.025 --rc genhtml_function_coverage=1 00:08:48.025 --rc genhtml_legend=1 00:08:48.025 --rc geninfo_all_blocks=1 00:08:48.025 --rc geninfo_unexecuted_blocks=1 00:08:48.025 00:08:48.025 ' 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:48.025 07:18:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.025 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.026 07:18:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:53.308 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:53.308 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.308 07:18:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:53.308 Found net devices under 0000:86:00.0: cvl_0_0 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.308 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:53.309 Found net devices under 0000:86:00.1: cvl_0_1 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.309 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.568 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:08:53.569 00:08:53.569 --- 10.0.0.2 ping statistics --- 00:08:53.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.569 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:08:53.569 00:08:53.569 --- 10.0.0.1 ping statistics --- 00:08:53.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.569 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=592744 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 592744 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 592744 ']' 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.569 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 [2024-11-26 07:18:21.571762] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
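The nvmf_tcp_init sequence recorded above moves one port of the ice-bound NIC (cvl_0_0 under 0000:86:00.0) into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays on the host as the initiator (10.0.0.1), opens TCP port 4420 in iptables, and verifies the path with one ping in each direction before nvmf_tgt is started inside the namespace. A condensed sketch of that sequence, taken from the commands in this run (repository paths shortened; the socket wait loop is only illustrative, the actual waitforlisten helper polls the RPC socket with retries):

    # isolate one port in a namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host

    # start the target inside the namespace, wait for its RPC socket, create the TCP transport
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192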
00:08:53.569 [2024-11-26 07:18:21.571803] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.569 [2024-11-26 07:18:21.637982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.828 [2024-11-26 07:18:21.683504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.828 [2024-11-26 07:18:21.683538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.829 [2024-11-26 07:18:21.683545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.829 [2024-11-26 07:18:21.683551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.829 [2024-11-26 07:18:21.683556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.829 [2024-11-26 07:18:21.684125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.829 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.829 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:53.829 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.829 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.829 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:53.829 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.829 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:54.088 [2024-11-26 07:18:21.984391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.088 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:54.088 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.088 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.088 07:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 ************************************ 00:08:54.088 START TEST lvs_grow_clean 00:08:54.088 ************************************ 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:54.088 07:18:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:54.088 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.347 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:54.347 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:54.347 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0dd42ef9-3ffe-4313-beb7-9253251edf41 00:08:54.347 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:08:54.347 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:54.606 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:54.606 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:54.606 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 lvol 150 00:08:54.866 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1e52e41d-4e44-42c9-abfa-34457d5e6935 00:08:54.866 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:54.866 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:55.125 [2024-11-26 07:18:22.985743] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:55.125 [2024-11-26 07:18:22.985792] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:55.125 true 00:08:55.125 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0dd42ef9-3ffe-4313-beb7-9253251edf41 00:08:55.125 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:55.125 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:55.125 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.384 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e52e41d-4e44-42c9-abfa-34457d5e6935 00:08:55.644 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:55.644 [2024-11-26 07:18:23.711956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.644 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=593246 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 593246 /var/tmp/bdevperf.sock 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 593246 ']' 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.903 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:55.903 [2024-11-26 07:18:23.947722] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
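At this point the 150M lvol (1e52e41d-4e44-42c9-abfa-34457d5e6935) carved out of lvstore 0dd42ef9-3ffe-4313-beb7-9253251edf41 is exported to the initiator side: it becomes a namespace of subsystem nqn.2016-06.io.spdk:cnode0, a TCP listener is opened on 10.0.0.2:4420, and bdevperf is launched with -z so it sits idle on /var/tmp/bdevperf.sock until a controller is attached and perform_tests is issued (both of which appear just below in the log). Condensed from the RPCs in this run, with the long workspace paths shortened:

    # target side: export the lvol over NVMe/TCP
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e52e41d-4e44-42c9-abfa-34457d5e6935
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf waits (-z), the controller is attached, then the workload is started
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests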
00:08:55.903 [2024-11-26 07:18:23.947770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593246 ] 00:08:56.163 [2024-11-26 07:18:24.009980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.163 [2024-11-26 07:18:24.050448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.163 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.163 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:56.163 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:56.423 Nvme0n1 00:08:56.423 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:56.684 [ 00:08:56.684 { 00:08:56.684 "name": "Nvme0n1", 00:08:56.684 "aliases": [ 00:08:56.684 "1e52e41d-4e44-42c9-abfa-34457d5e6935" 00:08:56.684 ], 00:08:56.684 "product_name": "NVMe disk", 00:08:56.684 "block_size": 4096, 00:08:56.684 "num_blocks": 38912, 00:08:56.684 "uuid": "1e52e41d-4e44-42c9-abfa-34457d5e6935", 00:08:56.684 "numa_id": 1, 00:08:56.684 "assigned_rate_limits": { 00:08:56.684 "rw_ios_per_sec": 0, 00:08:56.684 "rw_mbytes_per_sec": 0, 00:08:56.684 "r_mbytes_per_sec": 0, 00:08:56.684 "w_mbytes_per_sec": 0 00:08:56.684 }, 00:08:56.684 "claimed": false, 00:08:56.684 "zoned": false, 00:08:56.684 "supported_io_types": { 00:08:56.684 "read": true, 00:08:56.684 "write": true, 00:08:56.684 "unmap": true, 00:08:56.684 "flush": true, 00:08:56.684 "reset": true, 00:08:56.684 "nvme_admin": true, 00:08:56.684 "nvme_io": true, 00:08:56.684 "nvme_io_md": false, 00:08:56.684 "write_zeroes": true, 00:08:56.684 "zcopy": false, 00:08:56.684 "get_zone_info": false, 00:08:56.684 "zone_management": false, 00:08:56.684 "zone_append": false, 00:08:56.684 "compare": true, 00:08:56.684 "compare_and_write": true, 00:08:56.684 "abort": true, 00:08:56.684 "seek_hole": false, 00:08:56.684 "seek_data": false, 00:08:56.684 "copy": true, 00:08:56.684 "nvme_iov_md": false 00:08:56.684 }, 00:08:56.684 "memory_domains": [ 00:08:56.684 { 00:08:56.684 "dma_device_id": "system", 00:08:56.684 "dma_device_type": 1 00:08:56.684 } 00:08:56.684 ], 00:08:56.684 "driver_specific": { 00:08:56.684 "nvme": [ 00:08:56.684 { 00:08:56.684 "trid": { 00:08:56.684 "trtype": "TCP", 00:08:56.684 "adrfam": "IPv4", 00:08:56.684 "traddr": "10.0.0.2", 00:08:56.684 "trsvcid": "4420", 00:08:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:56.684 }, 00:08:56.684 "ctrlr_data": { 00:08:56.684 "cntlid": 1, 00:08:56.684 "vendor_id": "0x8086", 00:08:56.684 "model_number": "SPDK bdev Controller", 00:08:56.684 "serial_number": "SPDK0", 00:08:56.684 "firmware_revision": "25.01", 00:08:56.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:56.684 "oacs": { 00:08:56.684 "security": 0, 00:08:56.684 "format": 0, 00:08:56.684 "firmware": 0, 00:08:56.684 "ns_manage": 0 00:08:56.684 }, 00:08:56.684 "multi_ctrlr": true, 00:08:56.684 
"ana_reporting": false 00:08:56.684 }, 00:08:56.684 "vs": { 00:08:56.684 "nvme_version": "1.3" 00:08:56.684 }, 00:08:56.684 "ns_data": { 00:08:56.684 "id": 1, 00:08:56.684 "can_share": true 00:08:56.684 } 00:08:56.684 } 00:08:56.684 ], 00:08:56.684 "mp_policy": "active_passive" 00:08:56.684 } 00:08:56.684 } 00:08:56.684 ] 00:08:56.684 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=593370 00:08:56.684 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:56.684 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:56.684 Running I/O for 10 seconds... 00:08:58.065 Latency(us) 00:08:58.065 [2024-11-26T06:18:26.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.065 Nvme0n1 : 1.00 22950.00 89.65 0.00 0.00 0.00 0.00 0.00 00:08:58.065 [2024-11-26T06:18:26.165Z] =================================================================================================================== 00:08:58.065 [2024-11-26T06:18:26.165Z] Total : 22950.00 89.65 0.00 0.00 0.00 0.00 0.00 00:08:58.065 00:08:58.634 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:08:58.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.893 Nvme0n1 : 2.00 23058.50 90.07 0.00 0.00 0.00 0.00 0.00 00:08:58.893 [2024-11-26T06:18:26.993Z] =================================================================================================================== 00:08:58.893 [2024-11-26T06:18:26.993Z] Total : 23058.50 90.07 0.00 0.00 0.00 0.00 0.00 00:08:58.893 00:08:58.893 true 00:08:58.893 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:08:58.893 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:59.152 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:59.152 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:59.152 07:18:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 593370 00:08:59.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.723 Nvme0n1 : 3.00 23098.33 90.23 0.00 0.00 0.00 0.00 0.00 00:08:59.723 [2024-11-26T06:18:27.823Z] =================================================================================================================== 00:08:59.723 [2024-11-26T06:18:27.823Z] Total : 23098.33 90.23 0.00 0.00 0.00 0.00 0.00 00:08:59.723 00:09:00.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.661 Nvme0n1 : 4.00 23169.00 90.50 0.00 0.00 0.00 0.00 0.00 00:09:00.661 [2024-11-26T06:18:28.761Z] 
=================================================================================================================== 00:09:00.661 [2024-11-26T06:18:28.761Z] Total : 23169.00 90.50 0.00 0.00 0.00 0.00 0.00 00:09:00.661 00:09:02.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.038 Nvme0n1 : 5.00 23198.80 90.62 0.00 0.00 0.00 0.00 0.00 00:09:02.038 [2024-11-26T06:18:30.138Z] =================================================================================================================== 00:09:02.038 [2024-11-26T06:18:30.138Z] Total : 23198.80 90.62 0.00 0.00 0.00 0.00 0.00 00:09:02.038 00:09:02.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.978 Nvme0n1 : 6.00 23216.67 90.69 0.00 0.00 0.00 0.00 0.00 00:09:02.978 [2024-11-26T06:18:31.078Z] =================================================================================================================== 00:09:02.978 [2024-11-26T06:18:31.078Z] Total : 23216.67 90.69 0.00 0.00 0.00 0.00 0.00 00:09:02.978 00:09:03.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.918 Nvme0n1 : 7.00 23244.57 90.80 0.00 0.00 0.00 0.00 0.00 00:09:03.918 [2024-11-26T06:18:32.018Z] =================================================================================================================== 00:09:03.918 [2024-11-26T06:18:32.018Z] Total : 23244.57 90.80 0.00 0.00 0.00 0.00 0.00 00:09:03.918 00:09:04.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.857 Nvme0n1 : 8.00 23263.38 90.87 0.00 0.00 0.00 0.00 0.00 00:09:04.857 [2024-11-26T06:18:32.957Z] =================================================================================================================== 00:09:04.857 [2024-11-26T06:18:32.957Z] Total : 23263.38 90.87 0.00 0.00 0.00 0.00 0.00 00:09:04.857 00:09:05.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.794 Nvme0n1 : 9.00 23280.78 90.94 0.00 0.00 0.00 0.00 0.00 00:09:05.794 [2024-11-26T06:18:33.894Z] =================================================================================================================== 00:09:05.794 [2024-11-26T06:18:33.894Z] Total : 23280.78 90.94 0.00 0.00 0.00 0.00 0.00 00:09:05.794 00:09:06.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.732 Nvme0n1 : 10.00 23296.00 91.00 0.00 0.00 0.00 0.00 0.00 00:09:06.732 [2024-11-26T06:18:34.832Z] =================================================================================================================== 00:09:06.732 [2024-11-26T06:18:34.832Z] Total : 23296.00 91.00 0.00 0.00 0.00 0.00 0.00 00:09:06.732 00:09:06.732 00:09:06.732 Latency(us) 00:09:06.732 [2024-11-26T06:18:34.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.732 Nvme0n1 : 10.00 23299.44 91.01 0.00 0.00 5490.66 1495.93 9915.88 00:09:06.732 [2024-11-26T06:18:34.832Z] =================================================================================================================== 00:09:06.732 [2024-11-26T06:18:34.832Z] Total : 23299.44 91.01 0.00 0.00 5490.66 1495.93 9915.88 00:09:06.732 { 00:09:06.732 "results": [ 00:09:06.732 { 00:09:06.732 "job": "Nvme0n1", 00:09:06.732 "core_mask": "0x2", 00:09:06.732 "workload": "randwrite", 00:09:06.732 "status": "finished", 00:09:06.732 "queue_depth": 128, 00:09:06.732 "io_size": 4096, 00:09:06.732 
"runtime": 10.004016, 00:09:06.732 "iops": 23299.442943713806, 00:09:06.732 "mibps": 91.01344899888205, 00:09:06.732 "io_failed": 0, 00:09:06.732 "io_timeout": 0, 00:09:06.732 "avg_latency_us": 5490.6642599622755, 00:09:06.732 "min_latency_us": 1495.9304347826087, 00:09:06.733 "max_latency_us": 9915.881739130435 00:09:06.733 } 00:09:06.733 ], 00:09:06.733 "core_count": 1 00:09:06.733 } 00:09:06.733 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 593246 00:09:06.733 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 593246 ']' 00:09:06.733 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 593246 00:09:06.733 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:06.733 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.733 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 593246 00:09:06.733 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:06.733 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:06.992 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 593246' 00:09:06.992 killing process with pid 593246 00:09:06.992 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 593246 00:09:06.992 Received shutdown signal, test time was about 10.000000 seconds 00:09:06.992 00:09:06.992 Latency(us) 00:09:06.992 [2024-11-26T06:18:35.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.992 [2024-11-26T06:18:35.092Z] =================================================================================================================== 00:09:06.992 [2024-11-26T06:18:35.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:06.992 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 593246 00:09:06.992 07:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.251 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:07.511 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:09:07.511 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:07.511 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:07.511 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:07.511 07:18:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:07.771 [2024-11-26 07:18:35.757632] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:07.771 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:07.772 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:09:08.031 request: 00:09:08.031 { 00:09:08.031 "uuid": "0dd42ef9-3ffe-4313-beb7-9253251edf41", 00:09:08.031 "method": "bdev_lvol_get_lvstores", 00:09:08.031 "req_id": 1 00:09:08.031 } 00:09:08.031 Got JSON-RPC error response 00:09:08.031 response: 00:09:08.031 { 00:09:08.031 "code": -19, 00:09:08.031 "message": "No such device" 00:09:08.031 } 00:09:08.031 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:08.031 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.031 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.031 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.031 07:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.291 aio_bdev 00:09:08.291 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1e52e41d-4e44-42c9-abfa-34457d5e6935 00:09:08.291 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1e52e41d-4e44-42c9-abfa-34457d5e6935 00:09:08.291 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.291 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:08.291 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.291 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.291 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:08.291 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1e52e41d-4e44-42c9-abfa-34457d5e6935 -t 2000 00:09:08.550 [ 00:09:08.550 { 00:09:08.550 "name": "1e52e41d-4e44-42c9-abfa-34457d5e6935", 00:09:08.550 "aliases": [ 00:09:08.551 "lvs/lvol" 00:09:08.551 ], 00:09:08.551 "product_name": "Logical Volume", 00:09:08.551 "block_size": 4096, 00:09:08.551 "num_blocks": 38912, 00:09:08.551 "uuid": "1e52e41d-4e44-42c9-abfa-34457d5e6935", 00:09:08.551 "assigned_rate_limits": { 00:09:08.551 "rw_ios_per_sec": 0, 00:09:08.551 "rw_mbytes_per_sec": 0, 00:09:08.551 "r_mbytes_per_sec": 0, 00:09:08.551 "w_mbytes_per_sec": 0 00:09:08.551 }, 00:09:08.551 "claimed": false, 00:09:08.551 "zoned": false, 00:09:08.551 "supported_io_types": { 00:09:08.551 "read": true, 00:09:08.551 "write": true, 00:09:08.551 "unmap": true, 00:09:08.551 "flush": false, 00:09:08.551 "reset": true, 00:09:08.551 "nvme_admin": false, 00:09:08.551 "nvme_io": false, 00:09:08.551 "nvme_io_md": false, 00:09:08.551 "write_zeroes": true, 00:09:08.551 "zcopy": false, 00:09:08.551 "get_zone_info": false, 00:09:08.551 "zone_management": false, 00:09:08.551 "zone_append": false, 00:09:08.551 "compare": false, 00:09:08.551 "compare_and_write": false, 00:09:08.551 "abort": false, 00:09:08.551 "seek_hole": true, 00:09:08.551 "seek_data": true, 00:09:08.551 "copy": false, 00:09:08.551 "nvme_iov_md": false 00:09:08.551 }, 00:09:08.551 "driver_specific": { 00:09:08.551 "lvol": { 00:09:08.551 "lvol_store_uuid": "0dd42ef9-3ffe-4313-beb7-9253251edf41", 00:09:08.551 "base_bdev": "aio_bdev", 00:09:08.551 "thin_provision": false, 00:09:08.551 "num_allocated_clusters": 38, 00:09:08.551 "snapshot": false, 00:09:08.551 "clone": false, 00:09:08.551 "esnap_clone": false 00:09:08.551 } 00:09:08.551 } 00:09:08.551 } 00:09:08.551 ] 00:09:08.551 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:08.551 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:09:08.551 
07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:08.810 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:08.810 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:09:08.810 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:09.070 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:09.070 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1e52e41d-4e44-42c9-abfa-34457d5e6935 00:09:09.070 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0dd42ef9-3ffe-4313-beb7-9253251edf41 00:09:09.330 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.589 00:09:09.589 real 0m15.506s 00:09:09.589 user 0m15.002s 00:09:09.589 sys 0m1.470s 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:09.589 ************************************ 00:09:09.589 END TEST lvs_grow_clean 00:09:09.589 ************************************ 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.589 ************************************ 00:09:09.589 START TEST lvs_grow_dirty 00:09:09.589 ************************************ 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.589 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.849 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.849 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:10.109 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7b1e6663-f803-42db-a006-7041839c18e2 00:09:10.109 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:10.109 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.368 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.368 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.368 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7b1e6663-f803-42db-a006-7041839c18e2 lvol 150 00:09:10.368 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a20385b9-9f87-4d6a-9053-fcd40ef8b575 00:09:10.368 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.368 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.626 [2024-11-26 07:18:38.575778] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:10.627 [2024-11-26 07:18:38.575826] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.627 true 00:09:10.627 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:10.627 07:18:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:10.886 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:10.886 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:10.886 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a20385b9-9f87-4d6a-9053-fcd40ef8b575 00:09:11.145 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.404 [2024-11-26 07:18:39.342084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.404 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=595850 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 595850 /var/tmp/bdevperf.sock 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 595850 ']' 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:11.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.663 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:11.663 [2024-11-26 07:18:39.595941] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
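The backing file of lvstore 7b1e6663-f803-42db-a006-7041839c18e2 has already been extended from 200M to 400M and rescanned (the AIO bdev goes from 51200 to 102400 blocks), yet the lvstore still reports 49 total_data_clusters. Once bdevperf is writing, bdev_lvol_grow_lvstore claims the new space and the count is expected to reach 99, which the checks further down verify. The core sequence, condensed from the commands in this run (paths shortened):

    truncate -s 400M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev           # AIO bdev: 51200 -> 102400 blocks
    rpc.py bdev_lvol_grow_lvstore -u 7b1e6663-f803-42db-a006-7041839c18e2
    rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 \
        | jq -r '.[0].total_data_clusters'    # 49 before the grow, 99 after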
00:09:11.663 [2024-11-26 07:18:39.595994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595850 ] 00:09:11.663 [2024-11-26 07:18:39.658330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.663 [2024-11-26 07:18:39.701321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.922 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.922 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:11.922 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.181 Nvme0n1 00:09:12.181 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.441 [ 00:09:12.441 { 00:09:12.441 "name": "Nvme0n1", 00:09:12.441 "aliases": [ 00:09:12.441 "a20385b9-9f87-4d6a-9053-fcd40ef8b575" 00:09:12.441 ], 00:09:12.441 "product_name": "NVMe disk", 00:09:12.441 "block_size": 4096, 00:09:12.441 "num_blocks": 38912, 00:09:12.441 "uuid": "a20385b9-9f87-4d6a-9053-fcd40ef8b575", 00:09:12.441 "numa_id": 1, 00:09:12.441 "assigned_rate_limits": { 00:09:12.441 "rw_ios_per_sec": 0, 00:09:12.441 "rw_mbytes_per_sec": 0, 00:09:12.441 "r_mbytes_per_sec": 0, 00:09:12.441 "w_mbytes_per_sec": 0 00:09:12.441 }, 00:09:12.441 "claimed": false, 00:09:12.441 "zoned": false, 00:09:12.441 "supported_io_types": { 00:09:12.441 "read": true, 00:09:12.441 "write": true, 00:09:12.441 "unmap": true, 00:09:12.441 "flush": true, 00:09:12.441 "reset": true, 00:09:12.441 "nvme_admin": true, 00:09:12.441 "nvme_io": true, 00:09:12.441 "nvme_io_md": false, 00:09:12.441 "write_zeroes": true, 00:09:12.441 "zcopy": false, 00:09:12.441 "get_zone_info": false, 00:09:12.441 "zone_management": false, 00:09:12.441 "zone_append": false, 00:09:12.441 "compare": true, 00:09:12.441 "compare_and_write": true, 00:09:12.441 "abort": true, 00:09:12.441 "seek_hole": false, 00:09:12.441 "seek_data": false, 00:09:12.441 "copy": true, 00:09:12.441 "nvme_iov_md": false 00:09:12.441 }, 00:09:12.441 "memory_domains": [ 00:09:12.441 { 00:09:12.441 "dma_device_id": "system", 00:09:12.441 "dma_device_type": 1 00:09:12.441 } 00:09:12.441 ], 00:09:12.441 "driver_specific": { 00:09:12.441 "nvme": [ 00:09:12.441 { 00:09:12.441 "trid": { 00:09:12.441 "trtype": "TCP", 00:09:12.441 "adrfam": "IPv4", 00:09:12.441 "traddr": "10.0.0.2", 00:09:12.441 "trsvcid": "4420", 00:09:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.441 }, 00:09:12.441 "ctrlr_data": { 00:09:12.441 "cntlid": 1, 00:09:12.441 "vendor_id": "0x8086", 00:09:12.441 "model_number": "SPDK bdev Controller", 00:09:12.441 "serial_number": "SPDK0", 00:09:12.441 "firmware_revision": "25.01", 00:09:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.441 "oacs": { 00:09:12.441 "security": 0, 00:09:12.441 "format": 0, 00:09:12.441 "firmware": 0, 00:09:12.441 "ns_manage": 0 00:09:12.441 }, 00:09:12.441 "multi_ctrlr": true, 00:09:12.441 
"ana_reporting": false 00:09:12.441 }, 00:09:12.441 "vs": { 00:09:12.441 "nvme_version": "1.3" 00:09:12.441 }, 00:09:12.441 "ns_data": { 00:09:12.441 "id": 1, 00:09:12.441 "can_share": true 00:09:12.441 } 00:09:12.441 } 00:09:12.441 ], 00:09:12.441 "mp_policy": "active_passive" 00:09:12.441 } 00:09:12.441 } 00:09:12.441 ] 00:09:12.441 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=596076 00:09:12.441 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.441 07:18:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.441 Running I/O for 10 seconds... 00:09:13.820 Latency(us) 00:09:13.820 [2024-11-26T06:18:41.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.820 Nvme0n1 : 1.00 22886.00 89.40 0.00 0.00 0.00 0.00 0.00 00:09:13.820 [2024-11-26T06:18:41.920Z] =================================================================================================================== 00:09:13.820 [2024-11-26T06:18:41.920Z] Total : 22886.00 89.40 0.00 0.00 0.00 0.00 0.00 00:09:13.820 00:09:14.389 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:14.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.649 Nvme0n1 : 2.00 23065.00 90.10 0.00 0.00 0.00 0.00 0.00 00:09:14.649 [2024-11-26T06:18:42.749Z] =================================================================================================================== 00:09:14.649 [2024-11-26T06:18:42.749Z] Total : 23065.00 90.10 0.00 0.00 0.00 0.00 0.00 00:09:14.649 00:09:14.649 true 00:09:14.649 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:14.649 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:14.908 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.908 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.908 07:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 596076 00:09:15.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.478 Nvme0n1 : 3.00 23145.33 90.41 0.00 0.00 0.00 0.00 0.00 00:09:15.478 [2024-11-26T06:18:43.578Z] =================================================================================================================== 00:09:15.478 [2024-11-26T06:18:43.578Z] Total : 23145.33 90.41 0.00 0.00 0.00 0.00 0.00 00:09:15.478 00:09:16.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.859 Nvme0n1 : 4.00 23217.75 90.69 0.00 0.00 0.00 0.00 0.00 00:09:16.859 [2024-11-26T06:18:44.959Z] 
=================================================================================================================== 00:09:16.859 [2024-11-26T06:18:44.959Z] Total : 23217.75 90.69 0.00 0.00 0.00 0.00 0.00 00:09:16.859 00:09:17.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.798 Nvme0n1 : 5.00 23287.80 90.97 0.00 0.00 0.00 0.00 0.00 00:09:17.798 [2024-11-26T06:18:45.898Z] =================================================================================================================== 00:09:17.798 [2024-11-26T06:18:45.898Z] Total : 23287.80 90.97 0.00 0.00 0.00 0.00 0.00 00:09:17.798 00:09:18.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.737 Nvme0n1 : 6.00 23328.17 91.13 0.00 0.00 0.00 0.00 0.00 00:09:18.737 [2024-11-26T06:18:46.837Z] =================================================================================================================== 00:09:18.737 [2024-11-26T06:18:46.837Z] Total : 23328.17 91.13 0.00 0.00 0.00 0.00 0.00 00:09:18.737 00:09:19.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.678 Nvme0n1 : 7.00 23354.00 91.23 0.00 0.00 0.00 0.00 0.00 00:09:19.678 [2024-11-26T06:18:47.778Z] =================================================================================================================== 00:09:19.678 [2024-11-26T06:18:47.778Z] Total : 23354.00 91.23 0.00 0.00 0.00 0.00 0.00 00:09:19.678 00:09:20.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.616 Nvme0n1 : 8.00 23371.62 91.30 0.00 0.00 0.00 0.00 0.00 00:09:20.616 [2024-11-26T06:18:48.716Z] =================================================================================================================== 00:09:20.616 [2024-11-26T06:18:48.716Z] Total : 23371.62 91.30 0.00 0.00 0.00 0.00 0.00 00:09:20.616 00:09:21.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.571 Nvme0n1 : 9.00 23393.22 91.38 0.00 0.00 0.00 0.00 0.00 00:09:21.571 [2024-11-26T06:18:49.671Z] =================================================================================================================== 00:09:21.571 [2024-11-26T06:18:49.671Z] Total : 23393.22 91.38 0.00 0.00 0.00 0.00 0.00 00:09:21.571 00:09:22.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.508 Nvme0n1 : 10.00 23390.70 91.37 0.00 0.00 0.00 0.00 0.00 00:09:22.508 [2024-11-26T06:18:50.608Z] =================================================================================================================== 00:09:22.508 [2024-11-26T06:18:50.608Z] Total : 23390.70 91.37 0.00 0.00 0.00 0.00 0.00 00:09:22.508 00:09:22.508 00:09:22.508 Latency(us) 00:09:22.508 [2024-11-26T06:18:50.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.508 Nvme0n1 : 10.00 23391.86 91.37 0.00 0.00 5469.11 3191.32 10884.67 00:09:22.508 [2024-11-26T06:18:50.608Z] =================================================================================================================== 00:09:22.508 [2024-11-26T06:18:50.608Z] Total : 23391.86 91.37 0.00 0.00 5469.11 3191.32 10884.67 00:09:22.508 { 00:09:22.508 "results": [ 00:09:22.508 { 00:09:22.508 "job": "Nvme0n1", 00:09:22.508 "core_mask": "0x2", 00:09:22.508 "workload": "randwrite", 00:09:22.508 "status": "finished", 00:09:22.508 "queue_depth": 128, 00:09:22.508 "io_size": 4096, 00:09:22.508 
"runtime": 10.004974, 00:09:22.508 "iops": 23391.864886405503, 00:09:22.508 "mibps": 91.3744722125215, 00:09:22.508 "io_failed": 0, 00:09:22.508 "io_timeout": 0, 00:09:22.508 "avg_latency_us": 5469.105336715708, 00:09:22.509 "min_latency_us": 3191.318260869565, 00:09:22.509 "max_latency_us": 10884.674782608696 00:09:22.509 } 00:09:22.509 ], 00:09:22.509 "core_count": 1 00:09:22.509 } 00:09:22.509 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 595850 00:09:22.509 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 595850 ']' 00:09:22.509 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 595850 00:09:22.509 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:22.509 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.509 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595850 00:09:22.767 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:22.767 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:22.767 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 595850' 00:09:22.767 killing process with pid 595850 00:09:22.767 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 595850 00:09:22.767 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.767 00:09:22.767 Latency(us) 00:09:22.767 [2024-11-26T06:18:50.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.767 [2024-11-26T06:18:50.867Z] =================================================================================================================== 00:09:22.767 [2024-11-26T06:18:50.867Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.767 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 595850 00:09:22.767 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.027 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.286 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:23.286 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.286 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.286 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:23.286 07:18:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 592744 00:09:23.286 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 592744 00:09:23.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 592744 Killed "${NVMF_APP[@]}" "$@" 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=597925 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 597925 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 597925 ']' 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.546 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.546 [2024-11-26 07:18:51.458778] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:09:23.546 [2024-11-26 07:18:51.458824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.546 [2024-11-26 07:18:51.525518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.546 [2024-11-26 07:18:51.566688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.546 [2024-11-26 07:18:51.566723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.546 [2024-11-26 07:18:51.566730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.546 [2024-11-26 07:18:51.566736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:23.546 [2024-11-26 07:18:51.566741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.546 [2024-11-26 07:18:51.567291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.805 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:23.806 [2024-11-26 07:18:51.864601] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:23.806 [2024-11-26 07:18:51.864698] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:23.806 [2024-11-26 07:18:51.864724] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a20385b9-9f87-4d6a-9053-fcd40ef8b575 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a20385b9-9f87-4d6a-9053-fcd40ef8b575 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.806 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.065 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a20385b9-9f87-4d6a-9053-fcd40ef8b575 -t 2000 00:09:24.325 [ 00:09:24.325 { 00:09:24.325 "name": "a20385b9-9f87-4d6a-9053-fcd40ef8b575", 00:09:24.325 "aliases": [ 00:09:24.325 "lvs/lvol" 00:09:24.325 ], 00:09:24.325 "product_name": "Logical Volume", 00:09:24.325 "block_size": 4096, 00:09:24.325 "num_blocks": 38912, 00:09:24.325 "uuid": "a20385b9-9f87-4d6a-9053-fcd40ef8b575", 00:09:24.325 "assigned_rate_limits": { 00:09:24.325 "rw_ios_per_sec": 0, 00:09:24.325 "rw_mbytes_per_sec": 0, 
00:09:24.325 "r_mbytes_per_sec": 0, 00:09:24.325 "w_mbytes_per_sec": 0 00:09:24.325 }, 00:09:24.325 "claimed": false, 00:09:24.325 "zoned": false, 00:09:24.325 "supported_io_types": { 00:09:24.325 "read": true, 00:09:24.325 "write": true, 00:09:24.325 "unmap": true, 00:09:24.325 "flush": false, 00:09:24.325 "reset": true, 00:09:24.325 "nvme_admin": false, 00:09:24.325 "nvme_io": false, 00:09:24.325 "nvme_io_md": false, 00:09:24.325 "write_zeroes": true, 00:09:24.325 "zcopy": false, 00:09:24.325 "get_zone_info": false, 00:09:24.325 "zone_management": false, 00:09:24.325 "zone_append": false, 00:09:24.325 "compare": false, 00:09:24.325 "compare_and_write": false, 00:09:24.325 "abort": false, 00:09:24.325 "seek_hole": true, 00:09:24.325 "seek_data": true, 00:09:24.325 "copy": false, 00:09:24.325 "nvme_iov_md": false 00:09:24.325 }, 00:09:24.325 "driver_specific": { 00:09:24.325 "lvol": { 00:09:24.325 "lvol_store_uuid": "7b1e6663-f803-42db-a006-7041839c18e2", 00:09:24.325 "base_bdev": "aio_bdev", 00:09:24.325 "thin_provision": false, 00:09:24.325 "num_allocated_clusters": 38, 00:09:24.325 "snapshot": false, 00:09:24.325 "clone": false, 00:09:24.325 "esnap_clone": false 00:09:24.325 } 00:09:24.325 } 00:09:24.325 } 00:09:24.325 ] 00:09:24.325 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:24.325 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:24.325 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:24.584 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:24.584 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:24.584 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:24.584 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:24.584 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:24.844 [2024-11-26 07:18:52.817667] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:24.844 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:25.104 request: 00:09:25.104 { 00:09:25.104 "uuid": "7b1e6663-f803-42db-a006-7041839c18e2", 00:09:25.104 "method": "bdev_lvol_get_lvstores", 00:09:25.104 "req_id": 1 00:09:25.104 } 00:09:25.104 Got JSON-RPC error response 00:09:25.104 response: 00:09:25.104 { 00:09:25.104 "code": -19, 00:09:25.104 "message": "No such device" 00:09:25.104 } 00:09:25.104 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:25.104 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:25.104 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:25.104 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:25.104 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.364 aio_bdev 00:09:25.364 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a20385b9-9f87-4d6a-9053-fcd40ef8b575 00:09:25.364 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a20385b9-9f87-4d6a-9053-fcd40ef8b575 00:09:25.364 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.364 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:25.364 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.364 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.364 07:18:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.364 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a20385b9-9f87-4d6a-9053-fcd40ef8b575 -t 2000 00:09:25.624 [ 00:09:25.624 { 00:09:25.624 "name": "a20385b9-9f87-4d6a-9053-fcd40ef8b575", 00:09:25.624 "aliases": [ 00:09:25.624 "lvs/lvol" 00:09:25.624 ], 00:09:25.624 "product_name": "Logical Volume", 00:09:25.624 "block_size": 4096, 00:09:25.624 "num_blocks": 38912, 00:09:25.624 "uuid": "a20385b9-9f87-4d6a-9053-fcd40ef8b575", 00:09:25.624 "assigned_rate_limits": { 00:09:25.624 "rw_ios_per_sec": 0, 00:09:25.624 "rw_mbytes_per_sec": 0, 00:09:25.624 "r_mbytes_per_sec": 0, 00:09:25.624 "w_mbytes_per_sec": 0 00:09:25.624 }, 00:09:25.624 "claimed": false, 00:09:25.624 "zoned": false, 00:09:25.624 "supported_io_types": { 00:09:25.624 "read": true, 00:09:25.624 "write": true, 00:09:25.624 "unmap": true, 00:09:25.624 "flush": false, 00:09:25.624 "reset": true, 00:09:25.624 "nvme_admin": false, 00:09:25.624 "nvme_io": false, 00:09:25.624 "nvme_io_md": false, 00:09:25.624 "write_zeroes": true, 00:09:25.624 "zcopy": false, 00:09:25.624 "get_zone_info": false, 00:09:25.624 "zone_management": false, 00:09:25.624 "zone_append": false, 00:09:25.624 "compare": false, 00:09:25.624 "compare_and_write": false, 00:09:25.624 "abort": false, 00:09:25.624 "seek_hole": true, 00:09:25.624 "seek_data": true, 00:09:25.624 "copy": false, 00:09:25.624 "nvme_iov_md": false 00:09:25.624 }, 00:09:25.624 "driver_specific": { 00:09:25.624 "lvol": { 00:09:25.624 "lvol_store_uuid": "7b1e6663-f803-42db-a006-7041839c18e2", 00:09:25.624 "base_bdev": "aio_bdev", 00:09:25.624 "thin_provision": false, 00:09:25.624 "num_allocated_clusters": 38, 00:09:25.624 "snapshot": false, 00:09:25.624 "clone": false, 00:09:25.624 "esnap_clone": false 00:09:25.624 } 00:09:25.624 } 00:09:25.624 } 00:09:25.624 ] 00:09:25.624 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:25.624 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:25.624 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:25.884 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:25.884 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:25.884 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:26.144 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:26.144 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a20385b9-9f87-4d6a-9053-fcd40ef8b575 00:09:26.144 07:18:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7b1e6663-f803-42db-a006-7041839c18e2 00:09:26.415 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.673 00:09:26.673 real 0m17.012s 00:09:26.673 user 0m43.796s 00:09:26.673 sys 0m3.820s 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.673 ************************************ 00:09:26.673 END TEST lvs_grow_dirty 00:09:26.673 ************************************ 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:26.673 nvmf_trace.0 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.673 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.673 rmmod nvme_tcp 00:09:26.673 rmmod nvme_fabrics 00:09:26.673 rmmod nvme_keyring 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:26.933 
07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 597925 ']' 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 597925 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 597925 ']' 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 597925 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597925 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597925' 00:09:26.933 killing process with pid 597925 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 597925 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 597925 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.933 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.529 00:09:29.529 real 0m41.211s 00:09:29.529 user 1m4.218s 00:09:29.529 sys 0m9.816s 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:29.529 ************************************ 00:09:29.529 END TEST nvmf_lvs_grow 00:09:29.529 ************************************ 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.529 ************************************ 00:09:29.529 START TEST nvmf_bdev_io_wait 00:09:29.529 ************************************ 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:29.529 * Looking for test storage... 00:09:29.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.529 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.530 --rc genhtml_branch_coverage=1 00:09:29.530 --rc genhtml_function_coverage=1 00:09:29.530 --rc genhtml_legend=1 00:09:29.530 --rc geninfo_all_blocks=1 00:09:29.530 --rc geninfo_unexecuted_blocks=1 00:09:29.530 00:09:29.530 ' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.530 --rc genhtml_branch_coverage=1 00:09:29.530 --rc genhtml_function_coverage=1 00:09:29.530 --rc genhtml_legend=1 00:09:29.530 --rc geninfo_all_blocks=1 00:09:29.530 --rc geninfo_unexecuted_blocks=1 00:09:29.530 00:09:29.530 ' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.530 --rc genhtml_branch_coverage=1 00:09:29.530 --rc genhtml_function_coverage=1 00:09:29.530 --rc genhtml_legend=1 00:09:29.530 --rc geninfo_all_blocks=1 00:09:29.530 --rc geninfo_unexecuted_blocks=1 00:09:29.530 00:09:29.530 ' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.530 --rc genhtml_branch_coverage=1 00:09:29.530 --rc genhtml_function_coverage=1 00:09:29.530 --rc genhtml_legend=1 00:09:29.530 --rc geninfo_all_blocks=1 00:09:29.530 --rc geninfo_unexecuted_blocks=1 00:09:29.530 00:09:29.530 ' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.530 07:18:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.530 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.531 07:18:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:34.811 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:34.811 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.811 07:19:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.811 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:34.812 Found net devices under 0000:86:00.0: cvl_0_0 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:34.812 Found net devices under 0000:86:00.1: cvl_0_1 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:09:34.812 00:09:34.812 --- 10.0.0.2 ping statistics --- 00:09:34.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.812 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:09:34.812 00:09:34.812 --- 10.0.0.1 ping statistics --- 00:09:34.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.812 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=601977 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 601977 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 601977 ']' 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.812 [2024-11-26 07:19:02.357820] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
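The trace above is nvmf_tcp_init from nvmf/common.sh: the preceding blocks pick out the E810 ports by PCI ID (0x8086:0x159b), read their netdev names from /sys/bus/pci/devices/<bdf>/net/, and then build a target/initiator topology on one host by moving one port into a network namespace. A distilled, stand-alone replay of those commands, using the interface names and addresses seen in this run (a sketch, not the helper itself):

#!/usr/bin/env bash
# Target side (cvl_0_0, 10.0.0.2) lives in a netns; the initiator side
# (cvl_0_1, 10.0.0.1) stays in the default namespace. Netdev names behind a
# PCI function come from sysfs, e.g. ls /sys/bus/pci/devices/0000:86:00.0/net/
set -e
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# The SPDK_NVMF comment is what lets the teardown (iptr) drop these rules
# later via iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator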
00:09:34.812 [2024-11-26 07:19:02.357869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.812 [2024-11-26 07:19:02.424601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.812 [2024-11-26 07:19:02.469091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.812 [2024-11-26 07:19:02.469127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.812 [2024-11-26 07:19:02.469135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.812 [2024-11-26 07:19:02.469141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.812 [2024-11-26 07:19:02.469146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.812 [2024-11-26 07:19:02.470745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.812 [2024-11-26 07:19:02.470846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.812 [2024-11-26 07:19:02.470941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.812 [2024-11-26 07:19:02.470943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.812 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
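nvmfappstart launches the target inside that namespace with --wait-for-rpc, which holds framework initialization until the test has pushed bdev_set_options -p 5 -c 1 (a deliberately small bdev_io pool, which is what the bdev_io_wait test relies on); only after framework_start_init does it create the TCP transport. A hedged sketch of that ordering driven directly through scripts/rpc.py (rpc_cmd in the trace is the test suite's wrapper around the same RPCs; error handling omitted):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target paused, pinned to cores 0-3, inside the namespace.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
  -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc bdev_set_options -p 5 -c 1     # must land before framework init finishes
rpc framework_start_init           # now the bdev/nvmf subsystems come up
rpc nvmf_create_transport -t tcp -o -u 8192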
# set +x 00:09:34.813 [2024-11-26 07:19:02.615434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.813 Malloc0 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.813 [2024-11-26 07:19:02.662768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=602006 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=602008 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.813 { 00:09:34.813 "params": { 
00:09:34.813 "name": "Nvme$subsystem", 00:09:34.813 "trtype": "$TEST_TRANSPORT", 00:09:34.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.813 "adrfam": "ipv4", 00:09:34.813 "trsvcid": "$NVMF_PORT", 00:09:34.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.813 "hdgst": ${hdgst:-false}, 00:09:34.813 "ddgst": ${ddgst:-false} 00:09:34.813 }, 00:09:34.813 "method": "bdev_nvme_attach_controller" 00:09:34.813 } 00:09:34.813 EOF 00:09:34.813 )") 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=602010 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.813 { 00:09:34.813 "params": { 00:09:34.813 "name": "Nvme$subsystem", 00:09:34.813 "trtype": "$TEST_TRANSPORT", 00:09:34.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.813 "adrfam": "ipv4", 00:09:34.813 "trsvcid": "$NVMF_PORT", 00:09:34.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.813 "hdgst": ${hdgst:-false}, 00:09:34.813 "ddgst": ${ddgst:-false} 00:09:34.813 }, 00:09:34.813 "method": "bdev_nvme_attach_controller" 00:09:34.813 } 00:09:34.813 EOF 00:09:34.813 )") 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=602013 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.813 { 00:09:34.813 "params": { 00:09:34.813 "name": "Nvme$subsystem", 00:09:34.813 "trtype": "$TEST_TRANSPORT", 00:09:34.813 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:09:34.813 "adrfam": "ipv4", 00:09:34.813 "trsvcid": "$NVMF_PORT", 00:09:34.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.813 "hdgst": ${hdgst:-false}, 00:09:34.813 "ddgst": ${ddgst:-false} 00:09:34.813 }, 00:09:34.813 "method": "bdev_nvme_attach_controller" 00:09:34.813 } 00:09:34.813 EOF 00:09:34.813 )") 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.813 { 00:09:34.813 "params": { 00:09:34.813 "name": "Nvme$subsystem", 00:09:34.813 "trtype": "$TEST_TRANSPORT", 00:09:34.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.813 "adrfam": "ipv4", 00:09:34.813 "trsvcid": "$NVMF_PORT", 00:09:34.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.813 "hdgst": ${hdgst:-false}, 00:09:34.813 "ddgst": ${ddgst:-false} 00:09:34.813 }, 00:09:34.813 "method": "bdev_nvme_attach_controller" 00:09:34.813 } 00:09:34.813 EOF 00:09:34.813 )") 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 602006 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.813 "params": { 00:09:34.813 "name": "Nvme1", 00:09:34.813 "trtype": "tcp", 00:09:34.813 "traddr": "10.0.0.2", 00:09:34.813 "adrfam": "ipv4", 00:09:34.813 "trsvcid": "4420", 00:09:34.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.813 "hdgst": false, 00:09:34.813 "ddgst": false 00:09:34.813 }, 00:09:34.813 "method": "bdev_nvme_attach_controller" 00:09:34.813 }' 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
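Just above, bdev_io_wait.sh@22-25 provisioned the storage side over the same RPC channel: a 64 MB malloc bdev with 512-byte blocks, a subsystem that allows any host, the bdev attached as its first namespace, and a TCP listener on the namespaced address. Spelled out against rpc.py (same rpc() wrapper as the previous sketch; values taken from the trace):

rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MB, 512 B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDK00000000000001                                    # -a: allow any host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # Malloc0 becomes ns 1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420                                  # address/port from the log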
00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.813 "params": { 00:09:34.813 "name": "Nvme1", 00:09:34.813 "trtype": "tcp", 00:09:34.813 "traddr": "10.0.0.2", 00:09:34.813 "adrfam": "ipv4", 00:09:34.813 "trsvcid": "4420", 00:09:34.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.813 "hdgst": false, 00:09:34.813 "ddgst": false 00:09:34.813 }, 00:09:34.813 "method": "bdev_nvme_attach_controller" 00:09:34.813 }' 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.813 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.813 "params": { 00:09:34.813 "name": "Nvme1", 00:09:34.813 "trtype": "tcp", 00:09:34.813 "traddr": "10.0.0.2", 00:09:34.814 "adrfam": "ipv4", 00:09:34.814 "trsvcid": "4420", 00:09:34.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.814 "hdgst": false, 00:09:34.814 "ddgst": false 00:09:34.814 }, 00:09:34.814 "method": "bdev_nvme_attach_controller" 00:09:34.814 }' 00:09:34.814 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.814 07:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.814 "params": { 00:09:34.814 "name": "Nvme1", 00:09:34.814 "trtype": "tcp", 00:09:34.814 "traddr": "10.0.0.2", 00:09:34.814 "adrfam": "ipv4", 00:09:34.814 "trsvcid": "4420", 00:09:34.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.814 "hdgst": false, 00:09:34.814 "ddgst": false 00:09:34.814 }, 00:09:34.814 "method": "bdev_nvme_attach_controller" 00:09:34.814 }' 00:09:34.814 [2024-11-26 07:19:02.712859] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:09:34.814 [2024-11-26 07:19:02.712906] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:34.814 [2024-11-26 07:19:02.714854] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:09:34.814 [2024-11-26 07:19:02.714901] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:34.814 [2024-11-26 07:19:02.717882] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:09:34.814 [2024-11-26 07:19:02.717925] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:34.814 [2024-11-26 07:19:02.719253] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
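The four bdevperf workers (write, read, flush and unmap, core masks 0x10 through 0x80, one shared-memory id each) read their controller configuration from /dev/fd/63, a process substitution filled by gen_nvmf_target_json; the printf output above is the per-controller fragment that jq folds into the final config. A hand-written equivalent, under the assumption that the wrapper follows bdevperf's usual subsystems/bdev JSON layout (only the params block is taken verbatim from the log):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
gen_json() {
  cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false } } ] } ] }
EOF
}
BPERF=$SPDK/build/examples/bdevperf
$BPERF -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 &
$BPERF -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
$BPERF -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
$BPERF -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait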
00:09:34.814 [2024-11-26 07:19:02.719295] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:34.814 [2024-11-26 07:19:02.904303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.074 [2024-11-26 07:19:02.947307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.074 [2024-11-26 07:19:03.019577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.074 [2024-11-26 07:19:03.070556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.074 [2024-11-26 07:19:03.078581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:35.074 [2024-11-26 07:19:03.112567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.074 [2024-11-26 07:19:03.113526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:35.074 [2024-11-26 07:19:03.155538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:35.334 Running I/O for 1 seconds... 00:09:35.334 Running I/O for 1 seconds... 00:09:35.334 Running I/O for 1 seconds... 00:09:35.334 Running I/O for 1 seconds... 00:09:36.273 11827.00 IOPS, 46.20 MiB/s 00:09:36.273 Latency(us) 00:09:36.273 [2024-11-26T06:19:04.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.273 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:36.273 Nvme1n1 : 1.01 11884.18 46.42 0.00 0.00 10729.27 5926.73 15728.64 00:09:36.273 [2024-11-26T06:19:04.373Z] =================================================================================================================== 00:09:36.273 [2024-11-26T06:19:04.373Z] Total : 11884.18 46.42 0.00 0.00 10729.27 5926.73 15728.64 00:09:36.273 9454.00 IOPS, 36.93 MiB/s 00:09:36.273 Latency(us) 00:09:36.273 [2024-11-26T06:19:04.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.273 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:36.273 Nvme1n1 : 1.01 9511.92 37.16 0.00 0.00 13403.98 6496.61 20629.59 00:09:36.273 [2024-11-26T06:19:04.373Z] =================================================================================================================== 00:09:36.273 [2024-11-26T06:19:04.373Z] Total : 9511.92 37.16 0.00 0.00 13403.98 6496.61 20629.59 00:09:36.532 237696.00 IOPS, 928.50 MiB/s 00:09:36.532 Latency(us) 00:09:36.532 [2024-11-26T06:19:04.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.532 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:36.532 Nvme1n1 : 1.00 237324.28 927.05 0.00 0.00 536.23 231.51 1531.55 00:09:36.532 [2024-11-26T06:19:04.632Z] =================================================================================================================== 00:09:36.532 [2024-11-26T06:19:04.632Z] Total : 237324.28 927.05 0.00 0.00 536.23 231.51 1531.55 00:09:36.532 10962.00 IOPS, 42.82 MiB/s 00:09:36.532 Latency(us) 00:09:36.532 [2024-11-26T06:19:04.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.532 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:36.532 Nvme1n1 : 1.01 11051.71 43.17 0.00 0.00 11553.68 3433.52 24504.77 00:09:36.532 [2024-11-26T06:19:04.632Z] 
=================================================================================================================== 00:09:36.532 [2024-11-26T06:19:04.632Z] Total : 11051.71 43.17 0.00 0.00 11553.68 3433.52 24504.77 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 602008 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 602010 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 602013 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.532 rmmod nvme_tcp 00:09:36.532 rmmod nvme_fabrics 00:09:36.532 rmmod nvme_keyring 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 601977 ']' 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 601977 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 601977 ']' 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 601977 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.532 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 601977 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 601977' 00:09:36.792 killing process with pid 601977 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 601977 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 601977 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.792 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.348 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.348 00:09:39.348 real 0m9.751s 00:09:39.348 user 0m15.797s 00:09:39.348 sys 0m5.517s 00:09:39.348 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.348 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.348 ************************************ 00:09:39.348 END TEST nvmf_bdev_io_wait 00:09:39.348 ************************************ 00:09:39.348 07:19:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:39.348 07:19:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.348 07:19:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.348 07:19:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.348 ************************************ 00:09:39.348 START TEST nvmf_queue_depth 00:09:39.348 ************************************ 00:09:39.348 07:19:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:39.348 * Looking for test storage... 
00:09:39.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.348 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.349 --rc genhtml_branch_coverage=1 00:09:39.349 --rc genhtml_function_coverage=1 00:09:39.349 --rc genhtml_legend=1 00:09:39.349 --rc geninfo_all_blocks=1 00:09:39.349 --rc geninfo_unexecuted_blocks=1 00:09:39.349 00:09:39.349 ' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.349 --rc genhtml_branch_coverage=1 00:09:39.349 --rc genhtml_function_coverage=1 00:09:39.349 --rc genhtml_legend=1 00:09:39.349 --rc geninfo_all_blocks=1 00:09:39.349 --rc geninfo_unexecuted_blocks=1 00:09:39.349 00:09:39.349 ' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.349 --rc genhtml_branch_coverage=1 00:09:39.349 --rc genhtml_function_coverage=1 00:09:39.349 --rc genhtml_legend=1 00:09:39.349 --rc geninfo_all_blocks=1 00:09:39.349 --rc geninfo_unexecuted_blocks=1 00:09:39.349 00:09:39.349 ' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.349 --rc genhtml_branch_coverage=1 00:09:39.349 --rc genhtml_function_coverage=1 00:09:39.349 --rc genhtml_legend=1 00:09:39.349 --rc geninfo_all_blocks=1 00:09:39.349 --rc geninfo_unexecuted_blocks=1 00:09:39.349 00:09:39.349 ' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:39.349 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:39.350 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.350 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:44.624 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:44.624 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:44.624 Found net devices under 0000:86:00.0: cvl_0_0 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:44.624 Found net devices under 0000:86:00.1: cvl_0_1 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.624 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:44.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:09:44.625 00:09:44.625 --- 10.0.0.2 ping statistics --- 00:09:44.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.625 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
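Condensed, the nvmf_tcp_init sequence traced above wires the two E810 ports into a point-to-point test network: cvl_0_0 moves into a fresh namespace as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), TCP port 4420 is opened with an iptables rule tagged SPDK_NVMF so teardown can strip it later, and both directions are ping-checked. A sketch of just those steps:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
          -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator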
00:09:44.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:09:44.625 00:09:44.625 --- 10.0.0.1 ping statistics --- 00:09:44.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.625 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=605806 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 605806 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 605806 ']' 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.625 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.884 [2024-11-26 07:19:12.754748] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
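The target application is then started inside that namespace (the nvmfappstart -m 0x2 step above) and the harness blocks until the RPC socket is listening before issuing any configuration. A rough equivalent of that launch; the polling loop is only an illustration of what waitforlisten achieves, not its actual implementation:

  ip netns exec cvl_0_0_ns_spdk \
          /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # wait for /var/tmp/spdk.sock to appear before configuring the target
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done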
00:09:44.884 [2024-11-26 07:19:12.754802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.884 [2024-11-26 07:19:12.826024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.884 [2024-11-26 07:19:12.868570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.884 [2024-11-26 07:19:12.868602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.884 [2024-11-26 07:19:12.868610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.884 [2024-11-26 07:19:12.868618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.884 [2024-11-26 07:19:12.868623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.884 [2024-11-26 07:19:12.869195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.884 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.884 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:44.884 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.884 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.884 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.143 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.143 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.143 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.143 07:19:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.143 [2024-11-26 07:19:13.005226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.143 Malloc0 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.143 07:19:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.143 [2024-11-26 07:19:13.055766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=605943 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 605943 /var/tmp/bdevperf.sock 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 605943 ']' 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:45.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.143 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.143 [2024-11-26 07:19:13.107499] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
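Behind the rpc_cmd wrappers traced above, the bdevperf target is assembled with five RPCs against /var/tmp/spdk.sock: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener on the namespaced address. Assuming rpc_cmd resolves to scripts/rpc.py (as rpc_py does later in this run), the direct equivalent is roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420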
00:09:45.143 [2024-11-26 07:19:13.107544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605943 ] 00:09:45.143 [2024-11-26 07:19:13.172061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.143 [2024-11-26 07:19:13.215418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.406 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.406 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:45.406 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:45.406 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.406 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.406 NVMe0n1 00:09:45.406 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.406 07:19:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:45.665 Running I/O for 10 seconds... 00:09:47.541 11264.00 IOPS, 44.00 MiB/s [2024-11-26T06:19:16.580Z] 11719.50 IOPS, 45.78 MiB/s [2024-11-26T06:19:17.958Z] 11752.33 IOPS, 45.91 MiB/s [2024-11-26T06:19:18.896Z] 11872.00 IOPS, 46.38 MiB/s [2024-11-26T06:19:19.832Z] 11874.00 IOPS, 46.38 MiB/s [2024-11-26T06:19:20.768Z] 11957.50 IOPS, 46.71 MiB/s [2024-11-26T06:19:21.705Z] 11984.86 IOPS, 46.82 MiB/s [2024-11-26T06:19:22.642Z] 12024.25 IOPS, 46.97 MiB/s [2024-11-26T06:19:23.580Z] 12069.22 IOPS, 47.15 MiB/s [2024-11-26T06:19:23.839Z] 12102.00 IOPS, 47.27 MiB/s 00:09:55.739 Latency(us) 00:09:55.739 [2024-11-26T06:19:23.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.739 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:55.739 Verification LBA range: start 0x0 length 0x4000 00:09:55.739 NVMe0n1 : 10.05 12136.33 47.41 0.00 0.00 84057.23 14019.01 56076.02 00:09:55.739 [2024-11-26T06:19:23.839Z] =================================================================================================================== 00:09:55.739 [2024-11-26T06:19:23.839Z] Total : 12136.33 47.41 0.00 0.00 84057.23 14019.01 56076.02 00:09:55.739 { 00:09:55.739 "results": [ 00:09:55.739 { 00:09:55.739 "job": "NVMe0n1", 00:09:55.739 "core_mask": "0x1", 00:09:55.739 "workload": "verify", 00:09:55.739 "status": "finished", 00:09:55.739 "verify_range": { 00:09:55.739 "start": 0, 00:09:55.739 "length": 16384 00:09:55.739 }, 00:09:55.739 "queue_depth": 1024, 00:09:55.739 "io_size": 4096, 00:09:55.739 "runtime": 10.053208, 00:09:55.739 "iops": 12136.325041718026, 00:09:55.739 "mibps": 47.40751969421104, 00:09:55.739 "io_failed": 0, 00:09:55.739 "io_timeout": 0, 00:09:55.739 "avg_latency_us": 84057.22680671811, 00:09:55.739 "min_latency_us": 14019.005217391305, 00:09:55.739 "max_latency_us": 56076.02086956522 00:09:55.739 } 00:09:55.739 ], 00:09:55.739 "core_count": 1 00:09:55.739 } 00:09:55.739 07:19:23 
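The run settles at roughly 12.1k IOPS at queue depth 1024 with ~84 ms average latency, and bdevperf emits the same numbers as the JSON blob above. If that blob is captured to a file, the headline fields can be pulled back out with jq; neither the file name nor jq itself is part of this harness, they are just an illustration:

  # bdevperf_results.json is a hypothetical capture of the JSON printed above
  jq '.results[0] | {iops, mibps, avg_latency_us}' bdevperf_results.json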
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 605943 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 605943 ']' 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 605943 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605943 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605943' 00:09:55.739 killing process with pid 605943 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 605943 00:09:55.739 Received shutdown signal, test time was about 10.000000 seconds 00:09:55.739 00:09:55.739 Latency(us) 00:09:55.739 [2024-11-26T06:19:23.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.739 [2024-11-26T06:19:23.839Z] =================================================================================================================== 00:09:55.739 [2024-11-26T06:19:23.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 605943 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.739 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.739 rmmod nvme_tcp 00:09:55.998 rmmod nvme_fabrics 00:09:55.998 rmmod nvme_keyring 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 605806 ']' 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 605806 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 605806 ']' 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 605806 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605806 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605806' 00:09:55.998 killing process with pid 605806 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 605806 00:09:55.998 07:19:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 605806 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.257 07:19:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.163 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:58.163 00:09:58.163 real 0m19.239s 00:09:58.163 user 0m22.949s 00:09:58.163 sys 0m5.671s 00:09:58.163 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.163 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.163 ************************************ 00:09:58.163 END TEST nvmf_queue_depth 00:09:58.163 ************************************ 00:09:58.163 07:19:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:58.163 07:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.163 07:19:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.163 07:19:26 nvmf_tcp.nvmf_target_core -- 
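Teardown (nvmftestfini above) is the mirror image of the setup: the target process is killed, the NVMe/TCP modules are unloaded, the SPDK_NVMF-tagged iptables rule is stripped, and the namespace and leftover addresses are removed. Condensed, with the namespace removal spelled out explicitly since _remove_spdk_ns's body is not shown in the log:

  kill "$nvmfpid"                                        # nvmf_tgt, pid 605806 in this run
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rule
  ip netns del cvl_0_0_ns_spdk                           # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1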
common/autotest_common.sh@10 -- # set +x 00:09:58.163 ************************************ 00:09:58.163 START TEST nvmf_target_multipath 00:09:58.163 ************************************ 00:09:58.163 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:58.423 * Looking for test storage... 00:09:58.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.423 --rc genhtml_branch_coverage=1 00:09:58.423 --rc genhtml_function_coverage=1 00:09:58.423 --rc genhtml_legend=1 00:09:58.423 --rc geninfo_all_blocks=1 00:09:58.423 --rc geninfo_unexecuted_blocks=1 00:09:58.423 00:09:58.423 ' 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.423 --rc genhtml_branch_coverage=1 00:09:58.423 --rc genhtml_function_coverage=1 00:09:58.423 --rc genhtml_legend=1 00:09:58.423 --rc geninfo_all_blocks=1 00:09:58.423 --rc geninfo_unexecuted_blocks=1 00:09:58.423 00:09:58.423 ' 00:09:58.423 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.423 --rc genhtml_branch_coverage=1 00:09:58.423 --rc genhtml_function_coverage=1 00:09:58.423 --rc genhtml_legend=1 00:09:58.423 --rc geninfo_all_blocks=1 00:09:58.424 --rc geninfo_unexecuted_blocks=1 00:09:58.424 00:09:58.424 ' 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.424 --rc genhtml_branch_coverage=1 00:09:58.424 --rc genhtml_function_coverage=1 00:09:58.424 --rc genhtml_legend=1 00:09:58.424 --rc geninfo_all_blocks=1 00:09:58.424 --rc geninfo_unexecuted_blocks=1 00:09:58.424 00:09:58.424 ' 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.424 07:19:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.699 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:03.700 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:03.700 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:03.700 Found net devices under 0000:86:00.0: cvl_0_0 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.700 07:19:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:03.700 Found net devices under 0000:86:00.1: cvl_0_1 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.700 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.976 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:10:03.977 00:10:03.977 --- 10.0.0.2 ping statistics --- 00:10:03.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.977 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:10:03.977 00:10:03.977 --- 10.0.0.1 ping statistics --- 00:10:03.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.977 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.977 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:03.977 only one NIC for nvmf test 00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
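The multipath run then stops almost immediately: nvmf_tcp_init left the second target/initiator IPs empty because this host only exposes the single cvl_0_0/cvl_0_1 pair, so multipath.sh prints 'only one NIC for nvmf test', runs nvmftestfini, and exits 0. Structurally the guard is just the following; the variable name is not visible in the trace, so NVMF_SECOND_TARGET_IP is an assumption:

  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then   # assumed variable; the trace only shows an empty [ -z ] test
          echo 'only one NIC for nvmf test'
          nvmftestfini
          exit 0
  fi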
00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.977 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.977 rmmod nvme_tcp 00:10:03.977 rmmod nvme_fabrics 00:10:04.252 rmmod nvme_keyring 00:10:04.252 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.253 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.331 00:10:06.331 real 0m7.933s 00:10:06.331 user 0m1.737s 00:10:06.331 sys 0m4.179s 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:06.331 ************************************ 00:10:06.331 END TEST nvmf_target_multipath 00:10:06.331 ************************************ 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.331 ************************************ 00:10:06.331 START TEST nvmf_zcopy 00:10:06.331 ************************************ 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:06.331 * Looking for test storage... 
00:10:06.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.331 --rc genhtml_branch_coverage=1 00:10:06.331 --rc genhtml_function_coverage=1 00:10:06.331 --rc genhtml_legend=1 00:10:06.331 --rc geninfo_all_blocks=1 00:10:06.331 --rc geninfo_unexecuted_blocks=1 00:10:06.331 00:10:06.331 ' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.331 --rc genhtml_branch_coverage=1 00:10:06.331 --rc genhtml_function_coverage=1 00:10:06.331 --rc genhtml_legend=1 00:10:06.331 --rc geninfo_all_blocks=1 00:10:06.331 --rc geninfo_unexecuted_blocks=1 00:10:06.331 00:10:06.331 ' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.331 --rc genhtml_branch_coverage=1 00:10:06.331 --rc genhtml_function_coverage=1 00:10:06.331 --rc genhtml_legend=1 00:10:06.331 --rc geninfo_all_blocks=1 00:10:06.331 --rc geninfo_unexecuted_blocks=1 00:10:06.331 00:10:06.331 ' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.331 --rc genhtml_branch_coverage=1 00:10:06.331 --rc genhtml_function_coverage=1 00:10:06.331 --rc genhtml_legend=1 00:10:06.331 --rc geninfo_all_blocks=1 00:10:06.331 --rc geninfo_unexecuted_blocks=1 00:10:06.331 00:10:06.331 ' 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:06.331 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.656 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:13.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.244 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:13.245 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:13.245 Found net devices under 0000:86:00.0: cvl_0_0 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:13.245 Found net devices under 0000:86:00.1: cvl_0_1 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:10:13.245 00:10:13.245 --- 10.0.0.2 ping statistics --- 00:10:13.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.245 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:10:13.245 00:10:13.245 --- 10.0.0.1 ping statistics --- 00:10:13.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.245 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=614738 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 614738 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 614738 ']' 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.245 [2024-11-26 07:19:40.379963] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:10:13.245 [2024-11-26 07:19:40.380015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.245 [2024-11-26 07:19:40.447991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.245 [2024-11-26 07:19:40.490179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.245 [2024-11-26 07:19:40.490218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.245 [2024-11-26 07:19:40.490225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.245 [2024-11-26 07:19:40.490231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.245 [2024-11-26 07:19:40.490236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.245 [2024-11-26 07:19:40.490814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:13.245 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.246 [2024-11-26 07:19:40.630762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.246 [2024-11-26 07:19:40.650965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.246 malloc0 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:13.246 { 00:10:13.246 "params": { 00:10:13.246 "name": "Nvme$subsystem", 00:10:13.246 "trtype": "$TEST_TRANSPORT", 00:10:13.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.246 "adrfam": "ipv4", 00:10:13.246 "trsvcid": "$NVMF_PORT", 00:10:13.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.246 "hdgst": ${hdgst:-false}, 00:10:13.246 "ddgst": ${ddgst:-false} 00:10:13.246 }, 00:10:13.246 "method": "bdev_nvme_attach_controller" 00:10:13.246 } 00:10:13.246 EOF 00:10:13.246 )") 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
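At this point the target side is fully provisioned for the zcopy test. Stripped of the xtrace noise, the rpc_cmd calls above (rpc_cmd is the harness wrapper that talks to the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace over /var/tmp/spdk.sock) boil down to this sketch, with every flag copied verbatim from the log:

# TCP transport with zero-copy enabled.
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem cnode1 with an allow-any-host policy (-a) and up to 10 namespaces (-m 10),
# plus a data listener and a discovery listener on 10.0.0.2:4420.
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# A 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1.
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The gen_nvmf_target_json expansion that continues below only exists to hand bdevperf a matching initiator-side configuration for that listener.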
00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:13.246 07:19:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:13.246 "params": { 00:10:13.246 "name": "Nvme1", 00:10:13.246 "trtype": "tcp", 00:10:13.246 "traddr": "10.0.0.2", 00:10:13.246 "adrfam": "ipv4", 00:10:13.246 "trsvcid": "4420", 00:10:13.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:13.246 "hdgst": false, 00:10:13.246 "ddgst": false 00:10:13.246 }, 00:10:13.246 "method": "bdev_nvme_attach_controller" 00:10:13.246 }' 00:10:13.246 [2024-11-26 07:19:40.730504] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:10:13.246 [2024-11-26 07:19:40.730546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614804 ] 00:10:13.246 [2024-11-26 07:19:40.794301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.246 [2024-11-26 07:19:40.835730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.246 Running I/O for 10 seconds... 00:10:15.140 8411.00 IOPS, 65.71 MiB/s [2024-11-26T06:19:44.613Z] 8510.50 IOPS, 66.49 MiB/s [2024-11-26T06:19:45.550Z] 8542.33 IOPS, 66.74 MiB/s [2024-11-26T06:19:46.485Z] 8546.50 IOPS, 66.77 MiB/s [2024-11-26T06:19:47.421Z] 8555.40 IOPS, 66.84 MiB/s [2024-11-26T06:19:48.357Z] 8553.50 IOPS, 66.82 MiB/s [2024-11-26T06:19:49.296Z] 8552.57 IOPS, 66.82 MiB/s [2024-11-26T06:19:50.236Z] 8559.50 IOPS, 66.87 MiB/s [2024-11-26T06:19:51.613Z] 8561.67 IOPS, 66.89 MiB/s [2024-11-26T06:19:51.613Z] 8549.80 IOPS, 66.80 MiB/s 00:10:23.513 Latency(us) 00:10:23.513 [2024-11-26T06:19:51.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.513 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:23.513 Verification LBA range: start 0x0 length 0x1000 00:10:23.513 Nvme1n1 : 10.01 8554.59 66.83 0.00 0.00 14921.00 1852.10 22567.18 00:10:23.513 [2024-11-26T06:19:51.613Z] =================================================================================================================== 00:10:23.513 [2024-11-26T06:19:51.613Z] Total : 8554.59 66.83 0.00 0.00 14921.00 1852.10 22567.18 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=616597 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:23.513 { 00:10:23.513 "params": { 00:10:23.513 "name": 
"Nvme$subsystem", 00:10:23.513 "trtype": "$TEST_TRANSPORT", 00:10:23.513 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:23.513 "adrfam": "ipv4", 00:10:23.513 "trsvcid": "$NVMF_PORT", 00:10:23.513 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:23.513 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:23.513 "hdgst": ${hdgst:-false}, 00:10:23.513 "ddgst": ${ddgst:-false} 00:10:23.513 }, 00:10:23.513 "method": "bdev_nvme_attach_controller" 00:10:23.513 } 00:10:23.513 EOF 00:10:23.513 )") 00:10:23.513 [2024-11-26 07:19:51.398167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.398207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:23.513 07:19:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:23.513 "params": { 00:10:23.513 "name": "Nvme1", 00:10:23.513 "trtype": "tcp", 00:10:23.513 "traddr": "10.0.0.2", 00:10:23.513 "adrfam": "ipv4", 00:10:23.513 "trsvcid": "4420", 00:10:23.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:23.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:23.513 "hdgst": false, 00:10:23.513 "ddgst": false 00:10:23.513 }, 00:10:23.513 "method": "bdev_nvme_attach_controller" 00:10:23.513 }' 00:10:23.513 [2024-11-26 07:19:51.410169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.410188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.422209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.422221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.434219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.434229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.439561] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:10:23.513 [2024-11-26 07:19:51.439604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616597 ] 00:10:23.513 [2024-11-26 07:19:51.446252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.446263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.458282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.458293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.470315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.470326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.482349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.482359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.494382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.494393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.501922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.513 [2024-11-26 07:19:51.506411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.506422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.518445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.518461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.530476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.530487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.542508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.542520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.543933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.513 [2024-11-26 07:19:51.554548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.554563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.566579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.566600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.578608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.578621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.590636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:23.513 [2024-11-26 07:19:51.590648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.513 [2024-11-26 07:19:51.602674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.513 [2024-11-26 07:19:51.602692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.614700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.614710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.626732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.626742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.638781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.638803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.650806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.650821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.662837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.662852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.674868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.674879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.686901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.686912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.698932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.698944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.710975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.710990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.723004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.723014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.735036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.735047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.747069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.747080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 07:19:51.759104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.773 [2024-11-26 07:19:51.759119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.773 [2024-11-26 
07:19:51.771137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.771147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.774 [2024-11-26 07:19:51.783167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.783177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.774 [2024-11-26 07:19:51.795201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.795214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.774 [2024-11-26 07:19:51.807234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.807246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.774 [2024-11-26 07:19:51.819263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.819273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.774 [2024-11-26 07:19:51.831300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.831311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.774 [2024-11-26 07:19:51.843333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.843345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.774 [2024-11-26 07:19:51.855377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.855395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.774 Running I/O for 5 seconds... 
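The wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs around this point is expected output, not a failure: the timestamps show a new attempt roughly every 10-12 ms, meaning the test keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that is already attached while the random read/write job runs. Judging by the nvmf_rpc_ns_paused frames in the messages, each rejected attempt still takes the subsystem through its pause/resume path, which appears to be the behaviour being exercised under zero-copy I/O. A hedged sketch of that driving loop follows; the loop shape and exit condition are assumptions for illustration, not lifted from zcopy.sh:

# Keep hammering the add-namespace RPC while bdevperf (the PID captured as perfpid above) is alive.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
while kill -0 "$perfpid" 2> /dev/null; do
    # NSID 1 already exists, so this fails by design; the interesting side effect
    # is the subsystem pause/resume wrapped around the RPC while I/O is in flight.
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"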
00:10:23.774 [2024-11-26 07:19:51.867401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.774 [2024-11-26 07:19:51.867412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.032 [2024-11-26 07:19:51.882871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.032 [2024-11-26 07:19:51.882893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.032 [2024-11-26 07:19:51.896728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.032 [2024-11-26 07:19:51.896749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.032 [2024-11-26 07:19:51.910691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.032 [2024-11-26 07:19:51.910712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.032 [2024-11-26 07:19:51.925194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.032 [2024-11-26 07:19:51.925215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.032 [2024-11-26 07:19:51.936416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.032 [2024-11-26 07:19:51.936436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:51.950538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:51.950558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:51.964091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:51.964111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:51.978215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:51.978236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:51.992095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:51.992116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.005753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:52.005773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.020045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:52.020066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.033767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:52.033789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.047625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:52.047645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.061895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 
[2024-11-26 07:19:52.061915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.076249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:52.076269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.090558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:52.090577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.105037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:52.105055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.033 [2024-11-26 07:19:52.120124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.033 [2024-11-26 07:19:52.120143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.291 [2024-11-26 07:19:52.134466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.291 [2024-11-26 07:19:52.134486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.291 [2024-11-26 07:19:52.148526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.291 [2024-11-26 07:19:52.148545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.291 [2024-11-26 07:19:52.162366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.162390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.176768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.176788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.187854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.187874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.202346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.202366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.216571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.216590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.230336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.230354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.244012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.244031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.258316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.258336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.268981] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.269000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.283503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.283522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.297655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.297675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.311550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.311570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.325526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.325546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.339693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.339713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.353943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.353968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.368219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.368238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.292 [2024-11-26 07:19:52.382071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.292 [2024-11-26 07:19:52.382091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.396365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.396385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.410735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.410755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.425804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.425824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.439483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.439502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.453312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.453331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.467136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.467156] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.481147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.481167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.495234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.495254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.506171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.506190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.520717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.520736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.534530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.534550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.548545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.548564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.562171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.562191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.576444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.576464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.590393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.590413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.604271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.604296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.618148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.618169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.627400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.627420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.551 [2024-11-26 07:19:52.642113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.551 [2024-11-26 07:19:52.642133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.653080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.653099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.667031] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.667050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.680756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.680776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.694876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.694895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.708915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.708936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.723177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.723197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.737424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.737444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.751850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.751869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.762752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.762771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.777405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.777424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.791182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.791201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.805704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.805724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.819664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.819683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.833924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.833945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.848070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.848091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.862054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.862083] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 16527.00 IOPS, 129.12 MiB/s [2024-11-26T06:19:52.911Z] [2024-11-26 07:19:52.876215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.876235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.890465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.890486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.811 [2024-11-26 07:19:52.901256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.811 [2024-11-26 07:19:52.901292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:52.915736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:52.915756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:52.929798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:52.929818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:52.943710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:52.943729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:52.957585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:52.957604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:52.971831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:52.971850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:52.985583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:52.985603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:52.999640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:52.999661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.013678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.013698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.027252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.027273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.041131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.041151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.055167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.055188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 
07:19:53.069309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.069330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.083689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.083710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.097792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.097813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.111671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.111691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.125788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.125813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.139898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.139919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.070 [2024-11-26 07:19:53.153698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.070 [2024-11-26 07:19:53.153719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.168315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.168336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.179654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.179673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.194513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.194534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.208677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.208698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.223487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.223507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.238611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.238631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.252910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.252930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.266984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.267004] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.281328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.281348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.292272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.292293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.306723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.306744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.320358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.320378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.335189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.335208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.350244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.350264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.359928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.359954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.374638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.374657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.388697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.388718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.402339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.402359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.329 [2024-11-26 07:19:53.416815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.329 [2024-11-26 07:19:53.416834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.588 [2024-11-26 07:19:53.432613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.588 [2024-11-26 07:19:53.432633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.588 [2024-11-26 07:19:53.446201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.588 [2024-11-26 07:19:53.446221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.588 [2024-11-26 07:19:53.460625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.588 [2024-11-26 07:19:53.460645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.588 [2024-11-26 07:19:53.472181] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.588 [2024-11-26 07:19:53.472201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.486505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.486525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.500715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.500736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.514759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.514779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.528998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.529017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.542658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.542678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.556919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.556938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.570363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.570384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.584716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.584736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.598941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.598966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.612685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.612705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.626736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.626756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.640888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.640908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.655509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.655530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.666595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.666617] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.589 [2024-11-26 07:19:53.681009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.589 [2024-11-26 07:19:53.681030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.848 [2024-11-26 07:19:53.695327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.848 [2024-11-26 07:19:53.695348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.709723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.709744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.720766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.720787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.735630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.735650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.746806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.746825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.761033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.761053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.774997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.775017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.789245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.789265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.798852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.798872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.813156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.813176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.826684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.826704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.840953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.840973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.855592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.855612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.870754] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.870774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 16514.50 IOPS, 129.02 MiB/s [2024-11-26T06:19:53.949Z] [2024-11-26 07:19:53.885763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.885782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.901088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.901112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.915532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.915551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.929736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.929756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.849 [2024-11-26 07:19:53.940799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.849 [2024-11-26 07:19:53.940818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:53.955616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:53.955635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:53.969968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:53.969988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:53.985384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:53.985403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:53.999408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:53.999428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.013642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.013661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.027635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.027660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.041496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.041515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.055818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.055838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.066564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
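The throughput samples interleaved with the errors so far (16527.00 IOPS, 129.12 MiB/s and 16514.50 IOPS, 129.02 MiB/s) line up with the reported MiB/s figures if each I/O is 8 KiB; that 8 KiB size is inferred from the numbers in this log, not stated in it. A minimal cross-check under that assumption:

  # Reproduce the reported MiB/s from the logged IOPS, assuming 8192-byte I/Os (inference, not from the log)
  for iops in 16527.00 16514.50; do
    awk -v iops="$iops" 'BEGIN { printf "%s IOPS -> %.2f MiB/s\n", iops, iops * 8192 / (1024 * 1024) }'
  done
  # Prints 129.12 and 129.02 MiB/s, matching the values reported above.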
00:10:26.106 [2024-11-26 07:19:54.066584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.081137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.081158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.091826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.091848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.106251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.106272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.120039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.120058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.134032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.134052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.148220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.148240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.159451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.159478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.173722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.173743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.188041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.188062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.106 [2024-11-26 07:19:54.202058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.106 [2024-11-26 07:19:54.202078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.216188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.216207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.229887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.229907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.243774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.243793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.258126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.258145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.272343] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.272364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.286290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.286309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.300107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.300126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.314088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.314107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.327957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.327977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.342492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.342512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.356596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.356616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.371035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.371055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.382413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.382433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.397070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.397091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.408024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.408045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.422357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.422383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.436411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.436432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.364 [2024-11-26 07:19:54.450611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.364 [2024-11-26 07:19:54.450633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.464653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.464676] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.479054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.479075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.489577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.489597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.504347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.504368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.514998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.515020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.529572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.529597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.543571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.543591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.557513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.557534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.571725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.571746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.585232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.585253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.599680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.599701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.613837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.613857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.628364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.628384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.639364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.639384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.653704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.653725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.667700] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.667721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.681689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.681714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.695681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.624 [2024-11-26 07:19:54.695702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.624 [2024-11-26 07:19:54.710301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.625 [2024-11-26 07:19:54.710321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.725718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.725739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.740377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.740397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.754354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.754375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.768562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.768583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.779341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.779361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.793896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.793915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.808109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.808129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.822080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.822100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.836304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.836324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.851201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.851221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.866519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.866539] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 16522.67 IOPS, 129.08 MiB/s [2024-11-26T06:19:54.985Z] [2024-11-26 07:19:54.881238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.881258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.894896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.894915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.909041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.909061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.923452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.923471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.938760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.938780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.953130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.953150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.967172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.967192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.885 [2024-11-26 07:19:54.977820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.885 [2024-11-26 07:19:54.977839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.144 [2024-11-26 07:19:54.992360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.144 [2024-11-26 07:19:54.992379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.144 [2024-11-26 07:19:55.001304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.001323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.016453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.016473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.031718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.031737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.046003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.046023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.060005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.060025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 
07:19:55.070611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.070632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.085118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.085138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.099109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.099130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.113295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.113315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.127365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.127385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.138220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.138240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.153230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.153251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.164872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.164892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.174582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.174601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.188927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.188953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.202494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.202514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.216707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.216726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.145 [2024-11-26 07:19:55.231320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.145 [2024-11-26 07:19:55.231340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.246955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.246976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.260838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.260858] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.274497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.274517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.288835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.288854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.300053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.300072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.314506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.314526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.327640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.327660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.341892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.341912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.404 [2024-11-26 07:19:55.355879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.404 [2024-11-26 07:19:55.355898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.370251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.370270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.381761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.381780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.396651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.396670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.407652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.407673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.422100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.422119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.436106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.436136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.450390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.450411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.464301] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.464324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.478381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.478400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.405 [2024-11-26 07:19:55.492587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.405 [2024-11-26 07:19:55.492606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.664 [2024-11-26 07:19:55.506623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.664 [2024-11-26 07:19:55.506643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.520617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.520637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.534885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.534906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.548711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.548731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.562722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.562742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.576908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.576926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.590818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.590838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.605256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.605276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.619194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.619214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.633502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.633521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.647087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.647106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.661119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.661138] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.674891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.674911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.688835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.688855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.702879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.702899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.717025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.717049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.730645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.730665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.744534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.744554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.665 [2024-11-26 07:19:55.758598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.665 [2024-11-26 07:19:55.758618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.772326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.772346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.786199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.786219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.800103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.800124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.814195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.814216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.828248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.828270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.842058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.842080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.856391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.856413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.870138] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.870159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 16535.75 IOPS, 129.19 MiB/s [2024-11-26T06:19:56.024Z] [2024-11-26 07:19:55.884071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.884092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.898275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.898296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.912250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.912270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.926804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.926824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.942250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.942270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.956396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.956416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.970662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.970682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.981577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.981603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:55.996197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:55.996218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.924 [2024-11-26 07:19:56.010104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.924 [2024-11-26 07:19:56.010125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.024540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.024560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.035411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.035431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.050155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.050176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.060963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
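The paired messages that repeat above and below this point (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused) come from the zcopy test re-issuing nvmf_subsystem_add_ns against nqn.2016-06.io.spdk:cnode1 while the I/O job whose progress lines are interleaved here keeps running; each attempt is rejected because NSID 1 is already attached. A single such failure can presumably be reproduced by hand against the target from this run; the subsystem NQN and the malloc0 bdev name are taken from the trace further down, and rpc.py is assumed to talk to the same RPC socket the test uses:

  # malloc0 is already exported as NSID 1, so re-adding it with the same NSID fails
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # expected target-side log, matching the entries in this section:
  #   spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
  #   nvmf_rpc_ns_paused: Unable to add namespace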
00:10:28.185 [2024-11-26 07:19:56.060983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.075073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.075093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.088899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.088920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.102995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.103016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.117285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.117310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.131381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.131402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.145106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.145128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.159168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.159188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.173274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.173294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.187306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.187327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.201811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.201832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.212726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.212746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.227807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.227827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.239100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.239125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.254201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.254220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.185 [2024-11-26 07:19:56.269468] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.185 [2024-11-26 07:19:56.269487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.283625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.283645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.297839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.297859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.312394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.312413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.327937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.327964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.342509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.342529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.353157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.353177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.367416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.367435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.381325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.381344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.395825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.395845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.406479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.406498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.415995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.416015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.430898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.430917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.442004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.442024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.456771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.456790] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.467418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.467438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.481929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.481956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.492396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.492415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.507583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.507602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.523157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.523177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.444 [2024-11-26 07:19:56.537914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.444 [2024-11-26 07:19:56.537933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.553445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.553465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.567631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.567651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.581598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.581617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.595633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.595652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.609557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.609576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.623473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.623493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.634148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.634168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.643676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.643695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.658684] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.658703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.674800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.674819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.689026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.689046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.703322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.703342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.717650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.717670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.733345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.733366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.747748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.747768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.762022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.762042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.776357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.776376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.703 [2024-11-26 07:19:56.787791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.703 [2024-11-26 07:19:56.787811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.802232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.802252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.816120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.816140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.830668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.830687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.846723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.846742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.857604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.857624] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.867048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.867068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 16513.40 IOPS, 129.01 MiB/s [2024-11-26T06:19:57.062Z] [2024-11-26 07:19:56.881693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.881714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 00:10:28.962 Latency(us) 00:10:28.962 [2024-11-26T06:19:57.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.962 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:28.962 Nvme1n1 : 5.01 16515.29 129.03 0.00 0.00 7743.03 3419.27 19375.86 00:10:28.962 [2024-11-26T06:19:57.062Z] =================================================================================================================== 00:10:28.962 [2024-11-26T06:19:57.062Z] Total : 16515.29 129.03 0.00 0.00 7743.03 3419.27 19375.86 00:10:28.962 [2024-11-26 07:19:56.891452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.891470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.903476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.903492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.915518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.915534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.927545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.927561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.939574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.939588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.951603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.951623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.962 [2024-11-26 07:19:56.963638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.962 [2024-11-26 07:19:56.963652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.963 [2024-11-26 07:19:56.975669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.963 [2024-11-26 07:19:56.975682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.963 [2024-11-26 07:19:56.987702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.963 [2024-11-26 07:19:56.987716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.963 [2024-11-26 07:19:56.999729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.963 [2024-11-26 
07:19:56.999739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.963 [2024-11-26 07:19:57.011762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.963 [2024-11-26 07:19:57.011772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.963 [2024-11-26 07:19:57.023793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.963 [2024-11-26 07:19:57.023804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.963 [2024-11-26 07:19:57.035821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.963 [2024-11-26 07:19:57.035831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.963 [2024-11-26 07:19:57.047854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.963 [2024-11-26 07:19:57.047865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (616597) - No such process 00:10:28.963 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 616597 00:10:28.963 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.963 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.963 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.221 delay0 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.221 07:19:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:29.221 [2024-11-26 07:19:57.230079] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:35.785 [2024-11-26 07:20:03.402208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbb820 is same with the state(6) to be set 00:10:35.785 Initializing NVMe Controllers 00:10:35.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:35.785 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:35.785 Initialization complete. Launching workers. 00:10:35.785 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:10:35.785 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:10:35.785 success 164, unsuccessful 192, failed 0 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.785 rmmod nvme_tcp 00:10:35.785 rmmod nvme_fabrics 00:10:35.785 rmmod nvme_keyring 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.785 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 614738 ']' 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 614738 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 614738 ']' 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 614738 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 614738 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 614738' 00:10:35.786 killing process with pid 614738 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 614738 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 614738 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:35.786 07:20:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.786 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.690 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.690 00:10:37.690 real 0m31.507s 00:10:37.690 user 0m42.526s 00:10:37.690 sys 0m10.891s 00:10:37.690 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.690 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.690 ************************************ 00:10:37.690 END TEST nvmf_zcopy 00:10:37.690 ************************************ 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.950 ************************************ 00:10:37.950 START TEST nvmf_nmic 00:10:37.950 ************************************ 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:37.950 * Looking for test storage... 
00:10:37.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.950 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.950 --rc genhtml_branch_coverage=1 00:10:37.950 --rc genhtml_function_coverage=1 00:10:37.950 --rc genhtml_legend=1 00:10:37.950 --rc geninfo_all_blocks=1 00:10:37.950 --rc geninfo_unexecuted_blocks=1 00:10:37.950 00:10:37.950 ' 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.950 --rc genhtml_branch_coverage=1 00:10:37.950 --rc genhtml_function_coverage=1 00:10:37.950 --rc genhtml_legend=1 00:10:37.950 --rc geninfo_all_blocks=1 00:10:37.950 --rc geninfo_unexecuted_blocks=1 00:10:37.950 00:10:37.950 ' 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.950 --rc genhtml_branch_coverage=1 00:10:37.950 --rc genhtml_function_coverage=1 00:10:37.950 --rc genhtml_legend=1 00:10:37.950 --rc geninfo_all_blocks=1 00:10:37.950 --rc geninfo_unexecuted_blocks=1 00:10:37.950 00:10:37.950 ' 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.950 --rc genhtml_branch_coverage=1 00:10:37.950 --rc genhtml_function_coverage=1 00:10:37.950 --rc genhtml_legend=1 00:10:37.950 --rc geninfo_all_blocks=1 00:10:37.950 --rc geninfo_unexecuted_blocks=1 00:10:37.950 00:10:37.950 ' 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.950 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.951 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:38.210 
07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.210 07:20:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.481 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.481 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:43.482 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:43.482 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.482 07:20:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:43.482 Found net devices under 0000:86:00.0: cvl_0_0 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:43.482 Found net devices under 0000:86:00.1: cvl_0_1 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:10:43.482 00:10:43.482 --- 10.0.0.2 ping statistics --- 00:10:43.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.482 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:10:43.482 00:10:43.482 --- 10.0.0.1 ping statistics --- 00:10:43.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.482 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.482 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.483 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.483 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=622209 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 622209 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 622209 ']' 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.742 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.742 [2024-11-26 07:20:11.662713] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
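The trace up to this point is the standard NVMe/TCP test-bed bring-up from nvmf/common.sh (nvmf_tcp_init): the first E810 port, cvl_0_0, is moved into a private network namespace and addressed as 10.0.0.2/24 for the target, the second port, cvl_0_1, stays in the default namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened in the INPUT chain, and one ping in each direction confirms the path before nvmf_tgt is launched inside the namespace. A minimal stand-alone sketch of that bring-up follows; the interface names, addresses and namespace name are the values from this run and will differ on other hosts, and the iptables comment is only a tag so the rule can be removed during cleanup.

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init bring-up traced above (values taken from this run).
set -euo pipefail

TGT_IF=cvl_0_0            # port handed to the SPDK target
INI_IF=cvl_0_1            # port left in the default namespace for the initiator
NS=cvl_0_0_ns_spdk        # namespace the target runs in
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

# Start clean and move the target port into its own namespace.
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

# Address both ends and bring the links (plus loopback in the namespace) up.
ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in, then verify reachability both ways.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"

Keeping the target's port in its own namespace is what lets the kernel initiator on the same host reach it over the physical E810 pair rather than loopback; the target startup messages and the nmic test-case RPCs continue below.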
00:10:43.742 [2024-11-26 07:20:11.662761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.742 [2024-11-26 07:20:11.730863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.742 [2024-11-26 07:20:11.774971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.742 [2024-11-26 07:20:11.775008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.742 [2024-11-26 07:20:11.775016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.742 [2024-11-26 07:20:11.775022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.742 [2024-11-26 07:20:11.775027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.742 [2024-11-26 07:20:11.776578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.742 [2024-11-26 07:20:11.776693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.742 [2024-11-26 07:20:11.776754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.742 [2024-11-26 07:20:11.776756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 [2024-11-26 07:20:11.913015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 Malloc0 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.001 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 [2024-11-26 07:20:11.977442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:44.002 test case1: single bdev can't be used in multiple subsystems 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.002 07:20:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 [2024-11-26 07:20:12.005364] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:44.002 [2024-11-26 07:20:12.005384] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:44.002 [2024-11-26 07:20:12.005391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.002 request: 00:10:44.002 { 00:10:44.002 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:44.002 "namespace": { 00:10:44.002 "bdev_name": "Malloc0", 00:10:44.002 "no_auto_visible": false 
00:10:44.002 }, 00:10:44.002 "method": "nvmf_subsystem_add_ns", 00:10:44.002 "req_id": 1 00:10:44.002 } 00:10:44.002 Got JSON-RPC error response 00:10:44.002 response: 00:10:44.002 { 00:10:44.002 "code": -32602, 00:10:44.002 "message": "Invalid parameters" 00:10:44.002 } 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:44.002 Adding namespace failed - expected result. 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:44.002 test case2: host connect to nvmf target in multiple paths 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 [2024-11-26 07:20:12.017527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.002 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:45.380 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:46.317 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.317 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:46.317 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.317 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:46.317 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:48.850 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:48.850 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:48.850 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.850 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:48.850 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.850 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:48.850 07:20:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:48.850 [global] 00:10:48.850 thread=1 00:10:48.850 invalidate=1 00:10:48.850 rw=write 00:10:48.850 time_based=1 00:10:48.850 runtime=1 00:10:48.850 ioengine=libaio 00:10:48.850 direct=1 00:10:48.850 bs=4096 00:10:48.850 iodepth=1 00:10:48.850 norandommap=0 00:10:48.850 numjobs=1 00:10:48.850 00:10:48.850 verify_dump=1 00:10:48.850 verify_backlog=512 00:10:48.850 verify_state_save=0 00:10:48.850 do_verify=1 00:10:48.850 verify=crc32c-intel 00:10:48.850 [job0] 00:10:48.850 filename=/dev/nvme0n1 00:10:48.850 Could not set queue depth (nvme0n1) 00:10:48.851 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.851 fio-3.35 00:10:48.851 Starting 1 thread 00:10:49.787 00:10:49.787 job0: (groupid=0, jobs=1): err= 0: pid=623074: Tue Nov 26 07:20:17 2024 00:10:49.787 read: IOPS=944, BW=3777KiB/s (3868kB/s)(3928KiB/1040msec) 00:10:49.787 slat (nsec): min=7352, max=39035, avg=8584.16, stdev=2276.34 00:10:49.787 clat (usec): min=163, max=41131, avg=870.21, stdev=5161.29 00:10:49.787 lat (usec): min=172, max=41153, avg=878.80, stdev=5163.00 00:10:49.787 clat percentiles (usec): 00:10:49.787 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 178], 00:10:49.787 | 30.00th=[ 182], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 219], 00:10:49.787 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 255], 00:10:49.787 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:49.787 | 99.99th=[41157] 00:10:49.787 write: IOPS=984, BW=3938KiB/s (4033kB/s)(4096KiB/1040msec); 0 zone resets 00:10:49.787 slat (nsec): min=10145, max=45333, avg=11400.90, stdev=2232.29 00:10:49.787 clat (usec): min=119, max=510, avg=154.72, stdev=22.57 00:10:49.787 lat (usec): min=130, max=521, avg=166.12, stdev=23.07 00:10:49.787 clat percentiles (usec): 00:10:49.787 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 131], 00:10:49.787 | 30.00th=[ 137], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 00:10:49.787 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 180], 00:10:49.787 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 338], 99.95th=[ 510], 00:10:49.787 | 99.99th=[ 510] 00:10:49.787 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:49.787 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:49.787 lat (usec) : 250=96.11%, 500=3.04%, 750=0.05% 00:10:49.787 lat (msec) : 50=0.80% 00:10:49.787 cpu : usr=1.92%, sys=2.79%, ctx=2006, majf=0, minf=1 00:10:49.787 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.787 issued rwts: total=982,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.787 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.787 00:10:49.787 Run status group 0 (all jobs): 00:10:49.787 READ: bw=3777KiB/s (3868kB/s), 3777KiB/s-3777KiB/s (3868kB/s-3868kB/s), io=3928KiB (4022kB), run=1040-1040msec 00:10:49.787 WRITE: bw=3938KiB/s (4033kB/s), 3938KiB/s-3938KiB/s (4033kB/s-4033kB/s), io=4096KiB (4194kB), run=1040-1040msec 00:10:49.787 00:10:49.787 Disk stats (read/write): 00:10:49.787 nvme0n1: ios=1028/1024, merge=0/0, ticks=989/147, in_queue=1136, util=99.50% 00:10:49.787 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.049 07:20:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.049 rmmod nvme_tcp 00:10:50.049 rmmod nvme_fabrics 00:10:50.049 rmmod nvme_keyring 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 622209 ']' 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 622209 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 622209 ']' 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 622209 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 622209 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 622209' 00:10:50.049 killing process with pid 622209 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 622209 00:10:50.049 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 622209 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.310 07:20:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.847 00:10:52.847 real 0m14.545s 00:10:52.847 user 0m33.499s 00:10:52.847 sys 0m4.941s 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.847 ************************************ 00:10:52.847 END TEST nvmf_nmic 00:10:52.847 ************************************ 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.847 ************************************ 00:10:52.847 START TEST nvmf_fio_target 00:10:52.847 ************************************ 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:52.847 * Looking for test storage... 
00:10:52.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:52.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.847 --rc genhtml_branch_coverage=1 00:10:52.847 --rc genhtml_function_coverage=1 00:10:52.847 --rc genhtml_legend=1 00:10:52.847 --rc geninfo_all_blocks=1 00:10:52.847 --rc geninfo_unexecuted_blocks=1 00:10:52.847 00:10:52.847 ' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:52.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.847 --rc genhtml_branch_coverage=1 00:10:52.847 --rc genhtml_function_coverage=1 00:10:52.847 --rc genhtml_legend=1 00:10:52.847 --rc geninfo_all_blocks=1 00:10:52.847 --rc geninfo_unexecuted_blocks=1 00:10:52.847 00:10:52.847 ' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:52.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.847 --rc genhtml_branch_coverage=1 00:10:52.847 --rc genhtml_function_coverage=1 00:10:52.847 --rc genhtml_legend=1 00:10:52.847 --rc geninfo_all_blocks=1 00:10:52.847 --rc geninfo_unexecuted_blocks=1 00:10:52.847 00:10:52.847 ' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:52.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.847 --rc genhtml_branch_coverage=1 00:10:52.847 --rc genhtml_function_coverage=1 00:10:52.847 --rc genhtml_legend=1 00:10:52.847 --rc geninfo_all_blocks=1 00:10:52.847 --rc geninfo_unexecuted_blocks=1 00:10:52.847 00:10:52.847 ' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.847 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.848 07:20:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.848 07:20:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.122 07:20:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.122 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:58.123 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:58.123 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.123 07:20:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:58.123 Found net devices under 0000:86:00.0: cvl_0_0 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:58.123 Found net devices under 0000:86:00.1: cvl_0_1 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.123 07:20:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:10:58.123 00:10:58.123 --- 10.0.0.2 ping statistics --- 00:10:58.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.123 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:58.123 00:10:58.123 --- 10.0.0.1 ping statistics --- 00:10:58.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.123 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=626834 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 626834 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 626834 ']' 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.123 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.123 [2024-11-26 07:20:25.975433] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
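The nvmf_fio_target run repeats the same bring-up and then provisions the freshly started target over JSON-RPC before the initiator connects: the trace that follows shows rpc.py creating a TCP transport, a series of 64 MiB malloc bdevs, a raid0 and a concat volume on top of them, and a single subsystem that exposes all four namespaces on 10.0.0.2:4420. A minimal sketch of that sequence is below; it assumes the default /var/tmp/spdk.sock RPC socket and passes bdev names explicitly with -b, whereas the script in the log lets bdev_malloc_create pick the names and captures them from its output.

#!/usr/bin/env bash
# Sketch of the target provisioning that fio.sh drives via rpc.py
# (paths, NQN, serial and sizes are the ones used in this run).
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192

# Two stand-alone malloc bdevs plus backing devices for the RAID volumes.
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" bdev_malloc_create 64 512 -b Malloc1
for b in Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$RPC" bdev_malloc_create 64 512 -b "$b"
done
"$RPC" bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
"$RPC" bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

# One subsystem, four namespaces, one NVMe/TCP listener.
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
    "$RPC" nvmf_subsystem_add_ns "$NQN" "$ns"
done
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the host side issues nvme connect (with the generated host NQN and ID) against nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and the four namespaces appear as /dev/nvme0n1 through /dev/nvme0n4, which is what the four fio write jobs further down target.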
00:10:58.123 [2024-11-26 07:20:25.975480] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.123 [2024-11-26 07:20:26.043679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.123 [2024-11-26 07:20:26.086849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.123 [2024-11-26 07:20:26.086888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.123 [2024-11-26 07:20:26.086895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.124 [2024-11-26 07:20:26.086901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.124 [2024-11-26 07:20:26.086906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.124 [2024-11-26 07:20:26.088460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.124 [2024-11-26 07:20:26.088562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.124 [2024-11-26 07:20:26.088650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.124 [2024-11-26 07:20:26.088652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.124 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.124 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:58.124 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:58.124 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:58.124 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.383 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.383 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.383 [2024-11-26 07:20:26.398696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.383 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.642 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:58.642 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.901 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:58.901 07:20:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.159 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:59.160 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.418 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:59.418 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:59.419 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.678 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:59.678 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.936 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:59.936 07:20:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.193 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:00.193 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:00.450 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.707 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.707 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.707 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.707 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.964 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.222 [2024-11-26 07:20:29.129287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.222 07:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:01.480 07:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:01.480 07:20:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.856 07:20:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:02.856 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.856 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.856 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:02.856 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:02.856 07:20:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.762 07:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.762 07:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.762 07:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.762 07:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:04.762 07:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.762 07:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:04.762 07:20:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.762 [global] 00:11:04.762 thread=1 00:11:04.762 invalidate=1 00:11:04.762 rw=write 00:11:04.762 time_based=1 00:11:04.762 runtime=1 00:11:04.762 ioengine=libaio 00:11:04.762 direct=1 00:11:04.762 bs=4096 00:11:04.762 iodepth=1 00:11:04.762 norandommap=0 00:11:04.762 numjobs=1 00:11:04.762 00:11:04.762 verify_dump=1 00:11:04.762 verify_backlog=512 00:11:04.762 verify_state_save=0 00:11:04.762 do_verify=1 00:11:04.762 verify=crc32c-intel 00:11:04.762 [job0] 00:11:04.762 filename=/dev/nvme0n1 00:11:04.762 [job1] 00:11:04.762 filename=/dev/nvme0n2 00:11:04.763 [job2] 00:11:04.763 filename=/dev/nvme0n3 00:11:04.763 [job3] 00:11:04.763 filename=/dev/nvme0n4 00:11:04.763 Could not set queue depth (nvme0n1) 00:11:04.763 Could not set queue depth (nvme0n2) 00:11:04.763 Could not set queue depth (nvme0n3) 00:11:04.763 Could not set queue depth (nvme0n4) 00:11:05.022 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.022 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.022 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.022 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.022 fio-3.35 00:11:05.022 Starting 4 threads 00:11:06.401 00:11:06.401 job0: (groupid=0, jobs=1): err= 0: pid=628185: Tue Nov 26 07:20:34 2024 00:11:06.401 read: IOPS=521, BW=2085KiB/s (2135kB/s)(2112KiB/1013msec) 00:11:06.401 slat (nsec): min=6663, max=23196, avg=8150.88, stdev=2696.62 00:11:06.401 clat (usec): min=201, max=42002, avg=1513.63, stdev=7065.24 00:11:06.401 lat (usec): min=208, max=42025, avg=1521.78, stdev=7067.61 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 
00:11:06.401 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 255], 60.00th=[ 273], 00:11:06.401 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 330], 95.00th=[ 359], 00:11:06.401 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:06.401 | 99.99th=[42206] 00:11:06.401 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:11:06.401 slat (nsec): min=8950, max=52977, avg=10913.78, stdev=2197.72 00:11:06.401 clat (usec): min=119, max=294, avg=189.40, stdev=46.74 00:11:06.401 lat (usec): min=129, max=304, avg=200.31, stdev=46.64 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:11:06.401 | 30.00th=[ 147], 40.00th=[ 155], 50.00th=[ 169], 60.00th=[ 208], 00:11:06.401 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 253], 00:11:06.401 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 293], 00:11:06.401 | 99.99th=[ 293] 00:11:06.401 bw ( KiB/s): min= 8192, max= 8192, per=44.75%, avg=8192.00, stdev= 0.00, samples=1 00:11:06.401 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:06.401 lat (usec) : 250=77.32%, 500=21.59%, 750=0.06% 00:11:06.401 lat (msec) : 50=1.03% 00:11:06.401 cpu : usr=1.38%, sys=1.48%, ctx=1553, majf=0, minf=2 00:11:06.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.401 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.401 job1: (groupid=0, jobs=1): err= 0: pid=628186: Tue Nov 26 07:20:34 2024 00:11:06.401 read: IOPS=351, BW=1406KiB/s (1440kB/s)(1436KiB/1021msec) 00:11:06.401 slat (nsec): min=6722, max=26083, avg=8423.59, stdev=3459.90 00:11:06.401 clat (usec): min=196, max=41975, avg=2557.54, stdev=9368.87 00:11:06.401 lat (usec): min=204, max=41997, avg=2565.96, stdev=9372.09 00:11:06.401 clat percentiles (usec): 00:11:06.401 | 1.00th=[ 204], 5.00th=[ 221], 10.00th=[ 243], 20.00th=[ 260], 00:11:06.401 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:11:06.401 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[40633], 00:11:06.401 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:06.401 | 99.99th=[42206] 00:11:06.401 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:11:06.401 slat (nsec): min=9766, max=46323, avg=12319.79, stdev=2746.27 00:11:06.401 clat (usec): min=128, max=362, avg=176.19, stdev=35.64 00:11:06.402 lat (usec): min=140, max=393, avg=188.51, stdev=36.02 00:11:06.402 clat percentiles (usec): 00:11:06.402 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:11:06.402 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 169], 00:11:06.402 | 70.00th=[ 184], 80.00th=[ 210], 90.00th=[ 239], 95.00th=[ 241], 00:11:06.402 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 363], 99.95th=[ 363], 00:11:06.402 | 99.99th=[ 363] 00:11:06.402 bw ( KiB/s): min= 4096, max= 4096, per=22.37%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.402 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.402 lat (usec) : 250=63.72%, 500=33.98% 00:11:06.402 lat (msec) : 50=2.30% 00:11:06.402 cpu : usr=0.69%, sys=0.69%, ctx=873, majf=0, minf=1 00:11:06.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
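For reference, the bdev and subsystem plumbing that target/fio.sh drives through rpc.py in the trace above (fio.sh@19 through @46) collapses into the sequence below. The RPC script path, NQN, serial and the 10.0.0.2:4420 listener are the ones used in this run; the shortened nvme connect at the end omits the --hostnqn/--hostid pair the run passes explicitly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport, with the same -o / -u 8192 options fio.sh passes above.
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # Plain malloc bdevs (64 MiB, 512-byte blocks) for namespaces 1 and 2.
    $rpc bdev_malloc_create 64 512      # Malloc0
    $rpc bdev_malloc_create 64 512      # Malloc1

    # Two more malloc bdevs become a RAID-0, three more a concat bdev.
    $rpc bdev_malloc_create 64 512      # Malloc2
    $rpc bdev_malloc_create 64 512      # Malloc3
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_malloc_create 64 512      # Malloc4
    $rpc bdev_malloc_create 64 512      # Malloc5
    $rpc bdev_malloc_create 64 512      # Malloc6
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # One subsystem, four namespaces, one TCP listener, in the run's order.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

    # Initiator side: the four namespaces appear as /dev/nvme0n1../dev/nvme0n4,
    # which the test then waits for via lsblk until all four report the serial.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420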
00:11:06.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.402 issued rwts: total=359,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.402 job2: (groupid=0, jobs=1): err= 0: pid=628188: Tue Nov 26 07:20:34 2024 00:11:06.402 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:06.402 slat (nsec): min=6651, max=28909, avg=7749.46, stdev=1134.23 00:11:06.402 clat (usec): min=164, max=382, avg=204.89, stdev=31.66 00:11:06.402 lat (usec): min=172, max=390, avg=212.64, stdev=31.80 00:11:06.402 clat percentiles (usec): 00:11:06.402 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:11:06.402 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:11:06.402 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 241], 95.00th=[ 277], 00:11:06.402 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 355], 99.95th=[ 359], 00:11:06.402 | 99.99th=[ 383] 00:11:06.402 write: IOPS=2663, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:11:06.402 slat (nsec): min=9810, max=38975, avg=11184.83, stdev=1381.46 00:11:06.402 clat (usec): min=113, max=366, avg=155.23, stdev=34.58 00:11:06.402 lat (usec): min=129, max=405, avg=166.42, stdev=34.99 00:11:06.402 clat percentiles (usec): 00:11:06.402 | 1.00th=[ 125], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 133], 00:11:06.402 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:11:06.402 | 70.00th=[ 153], 80.00th=[ 178], 90.00th=[ 212], 95.00th=[ 229], 00:11:06.402 | 99.00th=[ 269], 99.50th=[ 293], 99.90th=[ 338], 99.95th=[ 351], 00:11:06.402 | 99.99th=[ 367] 00:11:06.402 bw ( KiB/s): min=11160, max=11160, per=60.96%, avg=11160.00, stdev= 0.00, samples=1 00:11:06.402 iops : min= 2790, max= 2790, avg=2790.00, stdev= 0.00, samples=1 00:11:06.402 lat (usec) : 250=95.52%, 500=4.48% 00:11:06.402 cpu : usr=2.80%, sys=4.90%, ctx=5228, majf=0, minf=1 00:11:06.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.402 issued rwts: total=2560,2666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.402 job3: (groupid=0, jobs=1): err= 0: pid=628193: Tue Nov 26 07:20:34 2024 00:11:06.402 read: IOPS=150, BW=602KiB/s (616kB/s)(620KiB/1030msec) 00:11:06.402 slat (nsec): min=7436, max=22524, avg=11098.01, stdev=4452.79 00:11:06.402 clat (usec): min=248, max=41997, avg=5866.33, stdev=14072.65 00:11:06.402 lat (usec): min=256, max=42019, avg=5877.43, stdev=14076.60 00:11:06.402 clat percentiles (usec): 00:11:06.402 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:11:06.402 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 322], 00:11:06.402 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[41157], 95.00th=[41157], 00:11:06.402 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:06.402 | 99.99th=[42206] 00:11:06.402 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:06.402 slat (nsec): min=10899, max=37282, avg=12840.86, stdev=2196.36 00:11:06.402 clat (usec): min=153, max=364, avg=213.29, stdev=30.07 00:11:06.402 lat (usec): min=165, max=375, avg=226.13, stdev=30.16 00:11:06.402 clat percentiles (usec): 00:11:06.402 | 
1.00th=[ 157], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:11:06.402 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:11:06.402 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 269], 00:11:06.402 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 363], 99.95th=[ 363], 00:11:06.402 | 99.99th=[ 363] 00:11:06.402 bw ( KiB/s): min= 4096, max= 4096, per=22.37%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.402 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.402 lat (usec) : 250=70.61%, 500=26.24% 00:11:06.402 lat (msec) : 50=3.15% 00:11:06.402 cpu : usr=0.49%, sys=1.07%, ctx=669, majf=0, minf=1 00:11:06.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.402 issued rwts: total=155,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.402 00:11:06.402 Run status group 0 (all jobs): 00:11:06.402 READ: bw=13.7MiB/s (14.3MB/s), 602KiB/s-9.99MiB/s (616kB/s-10.5MB/s), io=14.1MiB (14.8MB), run=1001-1030msec 00:11:06.402 WRITE: bw=17.9MiB/s (18.7MB/s), 1988KiB/s-10.4MiB/s (2036kB/s-10.9MB/s), io=18.4MiB (19.3MB), run=1001-1030msec 00:11:06.402 00:11:06.402 Disk stats (read/write): 00:11:06.402 nvme0n1: ios=574/1024, merge=0/0, ticks=662/189, in_queue=851, util=87.06% 00:11:06.402 nvme0n2: ios=403/512, merge=0/0, ticks=836/84, in_queue=920, util=89.84% 00:11:06.402 nvme0n3: ios=2105/2384, merge=0/0, ticks=679/361, in_queue=1040, util=93.54% 00:11:06.402 nvme0n4: ios=174/512, merge=0/0, ticks=1610/100, in_queue=1710, util=94.23% 00:11:06.402 07:20:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:06.402 [global] 00:11:06.402 thread=1 00:11:06.402 invalidate=1 00:11:06.402 rw=randwrite 00:11:06.402 time_based=1 00:11:06.402 runtime=1 00:11:06.402 ioengine=libaio 00:11:06.402 direct=1 00:11:06.402 bs=4096 00:11:06.402 iodepth=1 00:11:06.402 norandommap=0 00:11:06.402 numjobs=1 00:11:06.402 00:11:06.402 verify_dump=1 00:11:06.402 verify_backlog=512 00:11:06.402 verify_state_save=0 00:11:06.402 do_verify=1 00:11:06.402 verify=crc32c-intel 00:11:06.402 [job0] 00:11:06.402 filename=/dev/nvme0n1 00:11:06.402 [job1] 00:11:06.402 filename=/dev/nvme0n2 00:11:06.402 [job2] 00:11:06.402 filename=/dev/nvme0n3 00:11:06.402 [job3] 00:11:06.402 filename=/dev/nvme0n4 00:11:06.402 Could not set queue depth (nvme0n1) 00:11:06.402 Could not set queue depth (nvme0n2) 00:11:06.402 Could not set queue depth (nvme0n3) 00:11:06.402 Could not set queue depth (nvme0n4) 00:11:06.661 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.661 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.661 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.661 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:06.661 fio-3.35 00:11:06.661 Starting 4 threads 00:11:08.037 00:11:08.037 job0: (groupid=0, jobs=1): err= 0: pid=628569: Tue Nov 26 07:20:35 2024 00:11:08.038 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:08.038 
slat (nsec): min=6524, max=30825, avg=7695.09, stdev=2092.31 00:11:08.038 clat (usec): min=190, max=41460, avg=388.73, stdev=1051.44 00:11:08.038 lat (usec): min=198, max=41467, avg=396.43, stdev=1051.43 00:11:08.038 clat percentiles (usec): 00:11:08.038 | 1.00th=[ 227], 5.00th=[ 253], 10.00th=[ 269], 20.00th=[ 297], 00:11:08.038 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 379], 00:11:08.038 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 457], 95.00th=[ 498], 00:11:08.038 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 668], 99.95th=[41681], 00:11:08.038 | 99.99th=[41681] 00:11:08.038 write: IOPS=1762, BW=7049KiB/s (7218kB/s)(7056KiB/1001msec); 0 zone resets 00:11:08.038 slat (nsec): min=9052, max=58368, avg=10167.18, stdev=1825.14 00:11:08.038 clat (usec): min=129, max=451, avg=207.70, stdev=34.04 00:11:08.038 lat (usec): min=139, max=490, avg=217.87, stdev=34.32 00:11:08.038 clat percentiles (usec): 00:11:08.038 | 1.00th=[ 141], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 178], 00:11:08.038 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 208], 60.00th=[ 221], 00:11:08.038 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 255], 00:11:08.038 | 99.00th=[ 293], 99.50th=[ 318], 99.90th=[ 343], 99.95th=[ 453], 00:11:08.038 | 99.99th=[ 453] 00:11:08.038 bw ( KiB/s): min= 8192, max= 8192, per=26.12%, avg=8192.00, stdev= 0.00, samples=1 00:11:08.038 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:08.038 lat (usec) : 250=51.03%, 500=46.82%, 750=2.12% 00:11:08.038 lat (msec) : 50=0.03% 00:11:08.038 cpu : usr=1.90%, sys=2.70%, ctx=3300, majf=0, minf=2 00:11:08.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.038 issued rwts: total=1536,1764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.038 job1: (groupid=0, jobs=1): err= 0: pid=628580: Tue Nov 26 07:20:35 2024 00:11:08.038 read: IOPS=2220, BW=8883KiB/s (9096kB/s)(8892KiB/1001msec) 00:11:08.038 slat (nsec): min=6616, max=27594, avg=7539.48, stdev=884.92 00:11:08.038 clat (usec): min=194, max=514, avg=240.64, stdev=17.37 00:11:08.038 lat (usec): min=201, max=537, avg=248.18, stdev=17.44 00:11:08.038 clat percentiles (usec): 00:11:08.038 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:11:08.038 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:11:08.038 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 265], 00:11:08.038 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 400], 99.95th=[ 429], 00:11:08.038 | 99.99th=[ 515] 00:11:08.038 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:08.038 slat (nsec): min=9457, max=38733, avg=10512.56, stdev=1370.81 00:11:08.038 clat (usec): min=116, max=454, avg=160.73, stdev=25.46 00:11:08.038 lat (usec): min=127, max=465, avg=171.24, stdev=25.75 00:11:08.038 clat percentiles (usec): 00:11:08.038 | 1.00th=[ 125], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 145], 00:11:08.038 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:11:08.038 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 219], 00:11:08.038 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 396], 99.95th=[ 396], 00:11:08.038 | 99.99th=[ 457] 00:11:08.038 bw ( KiB/s): min=10616, max=10616, per=33.85%, avg=10616.00, stdev= 0.00, samples=1 00:11:08.038 iops : 
min= 2654, max= 2654, avg=2654.00, stdev= 0.00, samples=1 00:11:08.038 lat (usec) : 250=86.03%, 500=13.95%, 750=0.02% 00:11:08.038 cpu : usr=2.00%, sys=4.80%, ctx=4784, majf=0, minf=1 00:11:08.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.038 issued rwts: total=2223,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.038 job2: (groupid=0, jobs=1): err= 0: pid=628615: Tue Nov 26 07:20:35 2024 00:11:08.038 read: IOPS=1440, BW=5762KiB/s (5901kB/s)(5768KiB/1001msec) 00:11:08.038 slat (nsec): min=7665, max=25101, avg=8800.21, stdev=1242.36 00:11:08.038 clat (usec): min=218, max=41119, avg=443.47, stdev=1512.67 00:11:08.038 lat (usec): min=227, max=41129, avg=452.27, stdev=1512.69 00:11:08.038 clat percentiles (usec): 00:11:08.038 | 1.00th=[ 251], 5.00th=[ 285], 10.00th=[ 302], 20.00th=[ 322], 00:11:08.038 | 30.00th=[ 338], 40.00th=[ 355], 50.00th=[ 379], 60.00th=[ 404], 00:11:08.038 | 70.00th=[ 420], 80.00th=[ 441], 90.00th=[ 490], 95.00th=[ 529], 00:11:08.038 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[40633], 99.95th=[41157], 00:11:08.038 | 99.99th=[41157] 00:11:08.038 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:08.038 slat (nsec): min=10228, max=65508, avg=12925.37, stdev=4325.73 00:11:08.038 clat (usec): min=131, max=361, avg=207.49, stdev=35.10 00:11:08.038 lat (usec): min=143, max=394, avg=220.41, stdev=35.41 00:11:08.038 clat percentiles (usec): 00:11:08.038 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 176], 00:11:08.038 | 30.00th=[ 184], 40.00th=[ 196], 50.00th=[ 208], 60.00th=[ 219], 00:11:08.038 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 265], 00:11:08.038 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 334], 99.95th=[ 363], 00:11:08.038 | 99.99th=[ 363] 00:11:08.038 bw ( KiB/s): min= 7576, max= 7576, per=24.15%, avg=7576.00, stdev= 0.00, samples=1 00:11:08.038 iops : min= 1894, max= 1894, avg=1894.00, stdev= 0.00, samples=1 00:11:08.038 lat (usec) : 250=47.95%, 500=48.25%, 750=3.63%, 1000=0.03% 00:11:08.038 lat (msec) : 2=0.07%, 50=0.07% 00:11:08.038 cpu : usr=3.00%, sys=4.50%, ctx=2982, majf=0, minf=1 00:11:08.038 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.038 issued rwts: total=1442,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.038 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.038 job3: (groupid=0, jobs=1): err= 0: pid=628626: Tue Nov 26 07:20:35 2024 00:11:08.038 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:08.038 slat (nsec): min=5610, max=21554, avg=8337.90, stdev=1646.59 00:11:08.038 clat (usec): min=197, max=1035, avg=350.02, stdev=105.51 00:11:08.038 lat (usec): min=204, max=1044, avg=358.36, stdev=106.41 00:11:08.038 clat percentiles (usec): 00:11:08.038 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:11:08.038 | 30.00th=[ 265], 40.00th=[ 310], 50.00th=[ 334], 60.00th=[ 359], 00:11:08.038 | 70.00th=[ 404], 80.00th=[ 453], 90.00th=[ 510], 95.00th=[ 537], 00:11:08.038 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 848], 99.95th=[ 1037], 00:11:08.038 | 99.99th=[ 1037] 
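All five fio-wrapper passes in this test feed fio the same shape of job file shown in the output above, varying only rw, iodepth and runtime (the -t/-d/-r wrapper flags). The first pass (-p nvmf -i 4096 -d 1 -t write -r 1 -v) corresponds to a standalone job file like the following; the nvmf.fio filename is just for illustration, the option lines are the ones printed by the run:

    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4

Running fio nvmf.fio against the connected namespaces reproduces this first write pass; the later passes swap in rw=randwrite, iodepth=128, or (for the final read pass) runtime=10 and norandommap=1 without the verify options.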
00:11:08.038 write: IOPS=1987, BW=7948KiB/s (8139kB/s)(7956KiB/1001msec); 0 zone resets 00:11:08.038 slat (nsec): min=6828, max=66807, avg=11595.03, stdev=2548.35 00:11:08.038 clat (usec): min=136, max=348, avg=209.71, stdev=40.20 00:11:08.038 lat (usec): min=148, max=359, avg=221.30, stdev=40.92 00:11:08.038 clat percentiles (usec): 00:11:08.038 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:11:08.038 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 215], 60.00th=[ 223], 00:11:08.038 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 289], 00:11:08.038 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 351], 00:11:08.038 | 99.99th=[ 351] 00:11:08.039 bw ( KiB/s): min= 8192, max= 8192, per=26.12%, avg=8192.00, stdev= 0.00, samples=1 00:11:08.039 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:08.039 lat (usec) : 250=59.86%, 500=35.15%, 750=4.94%, 1000=0.03% 00:11:08.039 lat (msec) : 2=0.03% 00:11:08.039 cpu : usr=2.10%, sys=5.70%, ctx=3526, majf=0, minf=1 00:11:08.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.039 issued rwts: total=1536,1989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.039 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.039 00:11:08.039 Run status group 0 (all jobs): 00:11:08.039 READ: bw=26.3MiB/s (27.6MB/s), 5762KiB/s-8883KiB/s (5901kB/s-9096kB/s), io=26.3MiB (27.6MB), run=1001-1001msec 00:11:08.039 WRITE: bw=30.6MiB/s (32.1MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.7MiB (32.1MB), run=1001-1001msec 00:11:08.039 00:11:08.039 Disk stats (read/write): 00:11:08.039 nvme0n1: ios=1202/1536, merge=0/0, ticks=464/312, in_queue=776, util=81.86% 00:11:08.039 nvme0n2: ios=1799/2048, merge=0/0, ticks=1380/318, in_queue=1698, util=97.32% 00:11:08.039 nvme0n3: ios=1052/1405, merge=0/0, ticks=1379/272, in_queue=1651, util=96.74% 00:11:08.039 nvme0n4: ios=1342/1536, merge=0/0, ticks=793/308, in_queue=1101, util=97.23% 00:11:08.039 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:08.039 [global] 00:11:08.039 thread=1 00:11:08.039 invalidate=1 00:11:08.039 rw=write 00:11:08.039 time_based=1 00:11:08.039 runtime=1 00:11:08.039 ioengine=libaio 00:11:08.039 direct=1 00:11:08.039 bs=4096 00:11:08.039 iodepth=128 00:11:08.039 norandommap=0 00:11:08.039 numjobs=1 00:11:08.039 00:11:08.039 verify_dump=1 00:11:08.039 verify_backlog=512 00:11:08.039 verify_state_save=0 00:11:08.039 do_verify=1 00:11:08.039 verify=crc32c-intel 00:11:08.039 [job0] 00:11:08.039 filename=/dev/nvme0n1 00:11:08.039 [job1] 00:11:08.039 filename=/dev/nvme0n2 00:11:08.039 [job2] 00:11:08.039 filename=/dev/nvme0n3 00:11:08.039 [job3] 00:11:08.039 filename=/dev/nvme0n4 00:11:08.039 Could not set queue depth (nvme0n1) 00:11:08.039 Could not set queue depth (nvme0n2) 00:11:08.039 Could not set queue depth (nvme0n3) 00:11:08.039 Could not set queue depth (nvme0n4) 00:11:08.297 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.297 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.297 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:11:08.297 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.297 fio-3.35 00:11:08.297 Starting 4 threads 00:11:09.696 00:11:09.696 job0: (groupid=0, jobs=1): err= 0: pid=629055: Tue Nov 26 07:20:37 2024 00:11:09.696 read: IOPS=3003, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1005msec) 00:11:09.696 slat (nsec): min=1079, max=20744k, avg=126030.43, stdev=936331.24 00:11:09.696 clat (usec): min=4075, max=43175, avg=16371.63, stdev=6258.63 00:11:09.696 lat (usec): min=4080, max=49616, avg=16497.67, stdev=6351.48 00:11:09.696 clat percentiles (usec): 00:11:09.696 | 1.00th=[ 6521], 5.00th=[ 9372], 10.00th=[11731], 20.00th=[12387], 00:11:09.696 | 30.00th=[13173], 40.00th=[13698], 50.00th=[13960], 60.00th=[14615], 00:11:09.696 | 70.00th=[16909], 80.00th=[22414], 90.00th=[25297], 95.00th=[27919], 00:11:09.696 | 99.00th=[37487], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:11:09.696 | 99.99th=[43254] 00:11:09.696 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:11:09.696 slat (nsec): min=1997, max=41430k, avg=177092.43, stdev=1150671.44 00:11:09.696 clat (usec): min=2323, max=58333, avg=22740.36, stdev=12602.75 00:11:09.696 lat (usec): min=2330, max=76651, avg=22917.46, stdev=12723.86 00:11:09.696 clat percentiles (usec): 00:11:09.696 | 1.00th=[ 4293], 5.00th=[ 6521], 10.00th=[ 9634], 20.00th=[12649], 00:11:09.696 | 30.00th=[14615], 40.00th=[15139], 50.00th=[20317], 60.00th=[21103], 00:11:09.696 | 70.00th=[25822], 80.00th=[34866], 90.00th=[42730], 95.00th=[49546], 00:11:09.696 | 99.00th=[54789], 99.50th=[57410], 99.90th=[58459], 99.95th=[58459], 00:11:09.696 | 99.99th=[58459] 00:11:09.696 bw ( KiB/s): min=10568, max=14008, per=18.23%, avg=12288.00, stdev=2432.45, samples=2 00:11:09.696 iops : min= 2642, max= 3502, avg=3072.00, stdev=608.11, samples=2 00:11:09.696 lat (msec) : 4=0.38%, 10=7.88%, 20=54.24%, 50=35.45%, 100=2.05% 00:11:09.696 cpu : usr=2.09%, sys=2.79%, ctx=318, majf=0, minf=1 00:11:09.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:09.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.696 issued rwts: total=3019,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.696 job1: (groupid=0, jobs=1): err= 0: pid=629072: Tue Nov 26 07:20:37 2024 00:11:09.696 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:11:09.696 slat (nsec): min=1541, max=9155.4k, avg=116786.97, stdev=653160.10 00:11:09.696 clat (usec): min=7339, max=34110, avg=14637.56, stdev=4197.53 00:11:09.696 lat (usec): min=7345, max=34119, avg=14754.34, stdev=4247.55 00:11:09.696 clat percentiles (usec): 00:11:09.696 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[10552], 20.00th=[11600], 00:11:09.696 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13173], 60.00th=[13829], 00:11:09.696 | 70.00th=[15139], 80.00th=[17171], 90.00th=[21627], 95.00th=[24249], 00:11:09.696 | 99.00th=[28181], 99.50th=[30016], 99.90th=[34341], 99.95th=[34341], 00:11:09.696 | 99.99th=[34341] 00:11:09.696 write: IOPS=3780, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1007msec); 0 zone resets 00:11:09.696 slat (usec): min=2, max=9801, avg=145.49, stdev=646.92 00:11:09.696 clat (usec): min=6282, max=56593, avg=19663.95, stdev=10125.33 00:11:09.696 lat (usec): min=6744, max=56597, avg=19809.44, stdev=10190.87 00:11:09.696 clat 
percentiles (usec): 00:11:09.696 | 1.00th=[ 8029], 5.00th=[10552], 10.00th=[11338], 20.00th=[11600], 00:11:09.696 | 30.00th=[11863], 40.00th=[13173], 50.00th=[16909], 60.00th=[20055], 00:11:09.696 | 70.00th=[21103], 80.00th=[26870], 90.00th=[36439], 95.00th=[41681], 00:11:09.696 | 99.00th=[47449], 99.50th=[50594], 99.90th=[56361], 99.95th=[56361], 00:11:09.696 | 99.99th=[56361] 00:11:09.696 bw ( KiB/s): min=13056, max=16384, per=21.83%, avg=14720.00, stdev=2353.25, samples=2 00:11:09.696 iops : min= 3264, max= 4096, avg=3680.00, stdev=588.31, samples=2 00:11:09.696 lat (msec) : 10=3.15%, 20=69.49%, 50=27.05%, 100=0.31% 00:11:09.696 cpu : usr=2.88%, sys=5.67%, ctx=462, majf=0, minf=1 00:11:09.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:09.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.696 issued rwts: total=3584,3807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.696 job2: (groupid=0, jobs=1): err= 0: pid=629093: Tue Nov 26 07:20:37 2024 00:11:09.696 read: IOPS=4525, BW=17.7MiB/s (18.5MB/s)(18.5MiB/1046msec) 00:11:09.696 slat (nsec): min=1310, max=14437k, avg=113805.83, stdev=851180.11 00:11:09.696 clat (usec): min=4151, max=63238, avg=14964.82, stdev=8339.22 00:11:09.696 lat (usec): min=4157, max=63240, avg=15078.63, stdev=8382.75 00:11:09.696 clat percentiles (usec): 00:11:09.696 | 1.00th=[ 5669], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11207], 00:11:09.696 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:11:09.696 | 70.00th=[14746], 80.00th=[17433], 90.00th=[20317], 95.00th=[30016], 00:11:09.696 | 99.00th=[58459], 99.50th=[61080], 99.90th=[63177], 99.95th=[63177], 00:11:09.696 | 99.99th=[63177] 00:11:09.696 write: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1046msec); 0 zone resets 00:11:09.696 slat (usec): min=2, max=9904, avg=85.44, stdev=453.05 00:11:09.696 clat (usec): min=1642, max=63242, avg=12063.76, stdev=4924.16 00:11:09.696 lat (usec): min=1658, max=63245, avg=12149.20, stdev=4952.53 00:11:09.696 clat percentiles (usec): 00:11:09.696 | 1.00th=[ 3785], 5.00th=[ 5932], 10.00th=[ 7504], 20.00th=[10159], 00:11:09.696 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:11:09.696 | 70.00th=[11994], 80.00th=[12125], 90.00th=[15533], 95.00th=[22414], 00:11:09.696 | 99.00th=[32900], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:11:09.696 | 99.99th=[63177] 00:11:09.696 bw ( KiB/s): min=19824, max=21120, per=30.36%, avg=20472.00, stdev=916.41, samples=2 00:11:09.696 iops : min= 4956, max= 5280, avg=5118.00, stdev=229.10, samples=2 00:11:09.696 lat (msec) : 2=0.02%, 4=0.78%, 10=11.73%, 20=79.37%, 50=6.82% 00:11:09.696 lat (msec) : 100=1.28% 00:11:09.696 cpu : usr=2.78%, sys=5.93%, ctx=598, majf=0, minf=2 00:11:09.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:09.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.696 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.696 job3: (groupid=0, jobs=1): err= 0: pid=629099: Tue Nov 26 07:20:37 2024 00:11:09.696 read: IOPS=5167, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1009msec) 00:11:09.696 slat (nsec): min=1123, max=7704.6k, 
avg=92637.95, stdev=551369.27 00:11:09.696 clat (usec): min=4781, max=20777, avg=12079.00, stdev=1389.20 00:11:09.696 lat (usec): min=4799, max=20826, avg=12171.64, stdev=1451.11 00:11:09.696 clat percentiles (usec): 00:11:09.696 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11469], 00:11:09.696 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:11:09.696 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13566], 95.00th=[14484], 00:11:09.696 | 99.00th=[16188], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:11:09.696 | 99.99th=[20841] 00:11:09.696 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:11:09.696 slat (nsec): min=1963, max=7828.5k, avg=85417.27, stdev=453413.92 00:11:09.696 clat (usec): min=2002, max=20113, avg=11478.20, stdev=1992.11 00:11:09.696 lat (usec): min=2011, max=20122, avg=11563.62, stdev=1989.10 00:11:09.696 clat percentiles (usec): 00:11:09.696 | 1.00th=[ 4555], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[10421], 00:11:09.696 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:11:09.696 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12649], 95.00th=[13566], 00:11:09.696 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19792], 99.95th=[19792], 00:11:09.696 | 99.99th=[20055] 00:11:09.697 bw ( KiB/s): min=21904, max=22888, per=33.22%, avg=22396.00, stdev=695.79, samples=2 00:11:09.697 iops : min= 5476, max= 5722, avg=5599.00, stdev=173.95, samples=2 00:11:09.697 lat (msec) : 4=0.42%, 10=12.64%, 20=86.92%, 50=0.02% 00:11:09.697 cpu : usr=3.27%, sys=7.34%, ctx=475, majf=0, minf=2 00:11:09.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:09.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.697 issued rwts: total=5214,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.697 00:11:09.697 Run status group 0 (all jobs): 00:11:09.697 READ: bw=61.8MiB/s (64.8MB/s), 11.7MiB/s-20.2MiB/s (12.3MB/s-21.2MB/s), io=64.7MiB (67.8MB), run=1005-1046msec 00:11:09.697 WRITE: bw=65.8MiB/s (69.0MB/s), 11.9MiB/s-21.8MiB/s (12.5MB/s-22.9MB/s), io=68.9MiB (72.2MB), run=1005-1046msec 00:11:09.697 00:11:09.697 Disk stats (read/write): 00:11:09.697 nvme0n1: ios=2540/2560, merge=0/0, ticks=27437/33599, in_queue=61036, util=91.48% 00:11:09.697 nvme0n2: ios=3107/3240, merge=0/0, ticks=23185/29779, in_queue=52964, util=97.26% 00:11:09.697 nvme0n3: ios=4153/4279, merge=0/0, ticks=55568/49153, in_queue=104721, util=90.40% 00:11:09.697 nvme0n4: ios=4568/4608, merge=0/0, ticks=19880/17762, in_queue=37642, util=99.16% 00:11:09.697 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:09.697 [global] 00:11:09.697 thread=1 00:11:09.697 invalidate=1 00:11:09.697 rw=randwrite 00:11:09.697 time_based=1 00:11:09.697 runtime=1 00:11:09.697 ioengine=libaio 00:11:09.697 direct=1 00:11:09.697 bs=4096 00:11:09.697 iodepth=128 00:11:09.697 norandommap=0 00:11:09.697 numjobs=1 00:11:09.697 00:11:09.697 verify_dump=1 00:11:09.697 verify_backlog=512 00:11:09.697 verify_state_save=0 00:11:09.697 do_verify=1 00:11:09.697 verify=crc32c-intel 00:11:09.697 [job0] 00:11:09.697 filename=/dev/nvme0n1 00:11:09.697 [job1] 00:11:09.697 filename=/dev/nvme0n2 00:11:09.697 [job2] 00:11:09.697 
filename=/dev/nvme0n3 00:11:09.697 [job3] 00:11:09.697 filename=/dev/nvme0n4 00:11:09.697 Could not set queue depth (nvme0n1) 00:11:09.697 Could not set queue depth (nvme0n2) 00:11:09.697 Could not set queue depth (nvme0n3) 00:11:09.697 Could not set queue depth (nvme0n4) 00:11:09.954 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.954 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.954 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.954 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.954 fio-3.35 00:11:09.954 Starting 4 threads 00:11:11.324 00:11:11.325 job0: (groupid=0, jobs=1): err= 0: pid=629523: Tue Nov 26 07:20:39 2024 00:11:11.325 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:11:11.325 slat (nsec): min=1123, max=15368k, avg=90736.43, stdev=659367.96 00:11:11.325 clat (usec): min=3723, max=26764, avg=11586.00, stdev=3055.20 00:11:11.325 lat (usec): min=3732, max=26792, avg=11676.74, stdev=3094.65 00:11:11.325 clat percentiles (usec): 00:11:11.325 | 1.00th=[ 4883], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[ 9765], 00:11:11.325 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10945], 00:11:11.325 | 70.00th=[11994], 80.00th=[13698], 90.00th=[15270], 95.00th=[18220], 00:11:11.325 | 99.00th=[21627], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:11:11.325 | 99.99th=[26870] 00:11:11.325 write: IOPS=5774, BW=22.6MiB/s (23.7MB/s)(22.8MiB/1011msec); 0 zone resets 00:11:11.325 slat (usec): min=2, max=8456, avg=70.26, stdev=446.66 00:11:11.325 clat (usec): min=1854, max=37858, avg=10774.33, stdev=3929.88 00:11:11.325 lat (usec): min=1867, max=37866, avg=10844.58, stdev=3955.09 00:11:11.325 clat percentiles (usec): 00:11:11.325 | 1.00th=[ 3032], 5.00th=[ 5342], 10.00th=[ 6915], 20.00th=[ 9372], 00:11:11.325 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:11:11.325 | 70.00th=[11076], 80.00th=[11863], 90.00th=[13960], 95.00th=[16712], 00:11:11.325 | 99.00th=[28967], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:11:11.325 | 99.99th=[38011] 00:11:11.325 bw ( KiB/s): min=22320, max=23368, per=32.62%, avg=22844.00, stdev=741.05, samples=2 00:11:11.325 iops : min= 5580, max= 5842, avg=5711.00, stdev=185.26, samples=2 00:11:11.325 lat (msec) : 2=0.03%, 4=1.32%, 10=27.66%, 20=68.66%, 50=2.33% 00:11:11.325 cpu : usr=4.65%, sys=7.23%, ctx=498, majf=0, minf=1 00:11:11.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:11.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.325 issued rwts: total=5632,5838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.325 job1: (groupid=0, jobs=1): err= 0: pid=629524: Tue Nov 26 07:20:39 2024 00:11:11.325 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:11:11.325 slat (nsec): min=1163, max=15390k, avg=129571.28, stdev=894875.02 00:11:11.325 clat (usec): min=3897, max=58744, avg=15283.78, stdev=7256.27 00:11:11.325 lat (usec): min=3955, max=58748, avg=15413.35, stdev=7336.68 00:11:11.325 clat percentiles (usec): 00:11:11.325 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 
00:11:11.325 | 30.00th=[11469], 40.00th=[12125], 50.00th=[13173], 60.00th=[15008], 00:11:11.325 | 70.00th=[15926], 80.00th=[17433], 90.00th=[20579], 95.00th=[27132], 00:11:11.325 | 99.00th=[51643], 99.50th=[55313], 99.90th=[58983], 99.95th=[58983], 00:11:11.325 | 99.99th=[58983] 00:11:11.325 write: IOPS=3665, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1013msec); 0 zone resets 00:11:11.325 slat (usec): min=2, max=8865, avg=133.83, stdev=633.04 00:11:11.325 clat (usec): min=1633, max=61894, avg=19824.33, stdev=14648.35 00:11:11.325 lat (usec): min=1640, max=61911, avg=19958.16, stdev=14746.63 00:11:11.325 clat percentiles (usec): 00:11:11.325 | 1.00th=[ 3458], 5.00th=[ 5669], 10.00th=[ 8586], 20.00th=[10290], 00:11:11.325 | 30.00th=[10421], 40.00th=[10683], 50.00th=[13304], 60.00th=[18482], 00:11:11.325 | 70.00th=[21890], 80.00th=[25822], 90.00th=[50594], 95.00th=[55837], 00:11:11.325 | 99.00th=[58459], 99.50th=[60031], 99.90th=[60556], 99.95th=[60556], 00:11:11.325 | 99.99th=[62129] 00:11:11.325 bw ( KiB/s): min=13128, max=15600, per=20.51%, avg=14364.00, stdev=1747.97, samples=2 00:11:11.325 iops : min= 3282, max= 3900, avg=3591.00, stdev=436.99, samples=2 00:11:11.325 lat (msec) : 2=0.19%, 4=0.89%, 10=10.55%, 20=64.01%, 50=18.20% 00:11:11.325 lat (msec) : 100=6.15% 00:11:11.325 cpu : usr=2.17%, sys=4.35%, ctx=449, majf=0, minf=1 00:11:11.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:11.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.325 issued rwts: total=3584,3713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.325 job2: (groupid=0, jobs=1): err= 0: pid=629525: Tue Nov 26 07:20:39 2024 00:11:11.325 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:11:11.325 slat (nsec): min=1149, max=20257k, avg=136565.30, stdev=1002334.23 00:11:11.325 clat (usec): min=6793, max=50430, avg=17970.88, stdev=7509.24 00:11:11.325 lat (usec): min=6799, max=53395, avg=18107.45, stdev=7588.72 00:11:11.325 clat percentiles (usec): 00:11:11.325 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[11207], 20.00th=[13304], 00:11:11.325 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[15795], 00:11:11.325 | 70.00th=[18744], 80.00th=[22676], 90.00th=[31851], 95.00th=[33817], 00:11:11.325 | 99.00th=[38011], 99.50th=[40109], 99.90th=[50070], 99.95th=[50070], 00:11:11.325 | 99.99th=[50594] 00:11:11.325 write: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1005msec); 0 zone resets 00:11:11.325 slat (usec): min=2, max=19339, avg=178.09, stdev=1090.74 00:11:11.325 clat (usec): min=3943, max=97937, avg=22745.38, stdev=16905.32 00:11:11.325 lat (usec): min=4486, max=97944, avg=22923.48, stdev=17022.62 00:11:11.325 clat percentiles (usec): 00:11:11.325 | 1.00th=[ 4948], 5.00th=[ 8094], 10.00th=[10159], 20.00th=[11469], 00:11:11.325 | 30.00th=[13435], 40.00th=[13960], 50.00th=[15664], 60.00th=[21627], 00:11:11.325 | 70.00th=[23200], 80.00th=[32113], 90.00th=[41157], 95.00th=[56361], 00:11:11.325 | 99.00th=[86508], 99.50th=[88605], 99.90th=[88605], 99.95th=[98042], 00:11:11.325 | 99.99th=[98042] 00:11:11.325 bw ( KiB/s): min= 8200, max=16376, per=17.55%, avg=12288.00, stdev=5781.31, samples=2 00:11:11.325 iops : min= 2050, max= 4094, avg=3072.00, stdev=1445.33, samples=2 00:11:11.325 lat (msec) : 4=0.02%, 10=6.34%, 20=57.32%, 50=32.73%, 100=3.59% 00:11:11.325 cpu : usr=2.09%, sys=3.49%, ctx=249, majf=0, 
minf=1 00:11:11.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:11.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.325 issued rwts: total=3072,3160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.325 job3: (groupid=0, jobs=1): err= 0: pid=629526: Tue Nov 26 07:20:39 2024 00:11:11.325 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:11:11.325 slat (nsec): min=1365, max=12555k, avg=108084.91, stdev=782953.54 00:11:11.325 clat (usec): min=4028, max=43010, avg=13007.10, stdev=3792.19 00:11:11.325 lat (usec): min=4035, max=43018, avg=13115.18, stdev=3864.20 00:11:11.325 clat percentiles (usec): 00:11:11.325 | 1.00th=[ 5407], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11338], 00:11:11.325 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:11:11.325 | 70.00th=[12911], 80.00th=[14091], 90.00th=[16909], 95.00th=[19530], 00:11:11.325 | 99.00th=[28181], 99.50th=[37487], 99.90th=[43254], 99.95th=[43254], 00:11:11.325 | 99.99th=[43254] 00:11:11.325 write: IOPS=4962, BW=19.4MiB/s (20.3MB/s)(19.6MiB/1012msec); 0 zone resets 00:11:11.325 slat (usec): min=2, max=9468, avg=94.16, stdev=575.29 00:11:11.325 clat (usec): min=1686, max=50150, avg=13619.04, stdev=8685.96 00:11:11.325 lat (usec): min=1699, max=50157, avg=13713.20, stdev=8755.70 00:11:11.325 clat percentiles (usec): 00:11:11.325 | 1.00th=[ 3425], 5.00th=[ 5866], 10.00th=[ 7701], 20.00th=[10159], 00:11:11.325 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:11:11.325 | 70.00th=[11863], 80.00th=[12780], 90.00th=[18482], 95.00th=[40109], 00:11:11.325 | 99.00th=[46400], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:11:11.325 | 99.99th=[50070] 00:11:11.325 bw ( KiB/s): min=16440, max=22720, per=27.96%, avg=19580.00, stdev=4440.63, samples=2 00:11:11.325 iops : min= 4110, max= 5680, avg=4895.00, stdev=1110.16, samples=2 00:11:11.325 lat (msec) : 2=0.05%, 4=0.74%, 10=10.79%, 20=81.60%, 50=6.75% 00:11:11.325 lat (msec) : 100=0.07% 00:11:11.325 cpu : usr=3.86%, sys=6.03%, ctx=464, majf=0, minf=2 00:11:11.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:11.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.325 issued rwts: total=4608,5022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.325 00:11:11.325 Run status group 0 (all jobs): 00:11:11.325 READ: bw=65.2MiB/s (68.3MB/s), 11.9MiB/s-21.8MiB/s (12.5MB/s-22.8MB/s), io=66.0MiB (69.2MB), run=1005-1013msec 00:11:11.325 WRITE: bw=68.4MiB/s (71.7MB/s), 12.3MiB/s-22.6MiB/s (12.9MB/s-23.7MB/s), io=69.3MiB (72.6MB), run=1005-1013msec 00:11:11.325 00:11:11.325 Disk stats (read/write): 00:11:11.325 nvme0n1: ios=4660/5047, merge=0/0, ticks=45256/45217, in_queue=90473, util=94.19% 00:11:11.325 nvme0n2: ios=3060/3072, merge=0/0, ticks=45417/60361, in_queue=105778, util=98.27% 00:11:11.325 nvme0n3: ios=2618/2783, merge=0/0, ticks=23000/27569, in_queue=50569, util=98.23% 00:11:11.325 nvme0n4: ios=3864/4096, merge=0/0, ticks=48533/56439, in_queue=104972, util=96.44% 00:11:11.325 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:11.325 07:20:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=629662 00:11:11.325 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:11.325 07:20:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:11.325 [global] 00:11:11.325 thread=1 00:11:11.326 invalidate=1 00:11:11.326 rw=read 00:11:11.326 time_based=1 00:11:11.326 runtime=10 00:11:11.326 ioengine=libaio 00:11:11.326 direct=1 00:11:11.326 bs=4096 00:11:11.326 iodepth=1 00:11:11.326 norandommap=1 00:11:11.326 numjobs=1 00:11:11.326 00:11:11.326 [job0] 00:11:11.326 filename=/dev/nvme0n1 00:11:11.326 [job1] 00:11:11.326 filename=/dev/nvme0n2 00:11:11.326 [job2] 00:11:11.326 filename=/dev/nvme0n3 00:11:11.326 [job3] 00:11:11.326 filename=/dev/nvme0n4 00:11:11.326 Could not set queue depth (nvme0n1) 00:11:11.326 Could not set queue depth (nvme0n2) 00:11:11.326 Could not set queue depth (nvme0n3) 00:11:11.326 Could not set queue depth (nvme0n4) 00:11:11.326 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.326 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.326 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.326 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.326 fio-3.35 00:11:11.326 Starting 4 threads 00:11:14.601 07:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:14.601 07:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:14.601 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2285568, buflen=4096 00:11:14.601 fio: pid=629909, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.601 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48517120, buflen=4096 00:11:14.601 fio: pid=629908, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.601 07:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.601 07:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:14.858 07:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.858 07:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:14.858 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6578176, buflen=4096 00:11:14.858 fio: pid=629904, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.858 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=37183488, buflen=4096 00:11:14.858 fio: pid=629907, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.858 07:20:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.858 07:20:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:15.116 00:11:15.116 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=629904: Tue Nov 26 07:20:42 2024 00:11:15.116 read: IOPS=505, BW=2022KiB/s (2071kB/s)(6424KiB/3177msec) 00:11:15.116 slat (usec): min=6, max=13765, avg=24.61, stdev=451.32 00:11:15.116 clat (usec): min=177, max=42250, avg=1937.70, stdev=8038.64 00:11:15.116 lat (usec): min=185, max=55992, avg=1962.31, stdev=8133.28 00:11:15.116 clat percentiles (usec): 00:11:15.116 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 249], 00:11:15.116 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:11:15.116 | 70.00th=[ 277], 80.00th=[ 306], 90.00th=[ 465], 95.00th=[ 502], 00:11:15.116 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:15.116 | 99.99th=[42206] 00:11:15.116 bw ( KiB/s): min= 93, max=12311, per=7.77%, avg=2131.33, stdev=4987.00, samples=6 00:11:15.116 iops : min= 23, max= 3077, avg=532.67, stdev=1246.46, samples=6 00:11:15.116 lat (usec) : 250=24.83%, 500=70.07%, 750=1.00% 00:11:15.116 lat (msec) : 50=4.04% 00:11:15.116 cpu : usr=0.44%, sys=0.72%, ctx=1610, majf=0, minf=1 00:11:15.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.116 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.116 issued rwts: total=1607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.116 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=629907: Tue Nov 26 07:20:42 2024 00:11:15.116 read: IOPS=2696, BW=10.5MiB/s (11.0MB/s)(35.5MiB/3367msec) 00:11:15.116 slat (usec): min=6, max=27594, avg=17.28, stdev=384.77 00:11:15.116 clat (usec): min=169, max=41949, avg=349.14, stdev=2092.25 00:11:15.116 lat (usec): min=177, max=41971, avg=366.42, stdev=2128.09 00:11:15.116 clat percentiles (usec): 00:11:15.116 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 231], 00:11:15.116 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:11:15.116 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:11:15.116 | 99.00th=[ 277], 99.50th=[ 338], 99.90th=[41157], 99.95th=[41157], 00:11:15.116 | 99.99th=[42206] 00:11:15.116 bw ( KiB/s): min= 335, max=15520, per=37.83%, avg=10377.50, stdev=7510.98, samples=6 00:11:15.116 iops : min= 83, max= 3880, avg=2594.17, stdev=1877.89, samples=6 00:11:15.116 lat (usec) : 250=64.51%, 500=35.21% 00:11:15.116 lat (msec) : 50=0.26% 00:11:15.116 cpu : usr=1.54%, sys=4.34%, ctx=9085, majf=0, minf=2 00:11:15.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.116 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.116 issued rwts: total=9079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.116 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not 
supported): pid=629908: Tue Nov 26 07:20:42 2024 00:11:15.116 read: IOPS=4036, BW=15.8MiB/s (16.5MB/s)(46.3MiB/2935msec) 00:11:15.116 slat (usec): min=6, max=11257, avg= 9.84, stdev=124.82 00:11:15.116 clat (usec): min=181, max=40442, avg=233.83, stdev=372.79 00:11:15.116 lat (usec): min=189, max=40451, avg=243.67, stdev=393.77 00:11:15.116 clat percentiles (usec): 00:11:15.116 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:11:15.116 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:11:15.116 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 258], 00:11:15.116 | 99.00th=[ 277], 99.50th=[ 302], 99.90th=[ 486], 99.95th=[ 865], 00:11:15.116 | 99.99th=[ 4178] 00:11:15.116 bw ( KiB/s): min=14091, max=18064, per=59.66%, avg=16362.20, stdev=1445.42, samples=5 00:11:15.116 iops : min= 3522, max= 4516, avg=4090.40, stdev=361.65, samples=5 00:11:15.116 lat (usec) : 250=89.82%, 500=10.10%, 750=0.02%, 1000=0.01% 00:11:15.116 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 50=0.01% 00:11:15.116 cpu : usr=2.32%, sys=6.37%, ctx=11848, majf=0, minf=2 00:11:15.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.116 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.116 issued rwts: total=11846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.116 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=629909: Tue Nov 26 07:20:42 2024 00:11:15.116 read: IOPS=205, BW=820KiB/s (839kB/s)(2232KiB/2723msec) 00:11:15.116 slat (nsec): min=4551, max=37648, avg=10296.82, stdev=5331.39 00:11:15.116 clat (usec): min=175, max=42011, avg=4828.62, stdev=12923.04 00:11:15.116 lat (usec): min=182, max=42039, avg=4838.89, stdev=12927.79 00:11:15.117 clat percentiles (usec): 00:11:15.117 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:11:15.117 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:11:15.117 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[40633], 95.00th=[41157], 00:11:15.117 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:15.117 | 99.99th=[42206] 00:11:15.117 bw ( KiB/s): min= 96, max= 4015, per=3.22%, avg=883.00, stdev=1750.85, samples=5 00:11:15.117 iops : min= 24, max= 1003, avg=220.60, stdev=437.38, samples=5 00:11:15.117 lat (usec) : 250=85.15%, 500=3.22%, 750=0.18% 00:11:15.117 lat (msec) : 50=11.27% 00:11:15.117 cpu : usr=0.11%, sys=0.33%, ctx=561, majf=0, minf=2 00:11:15.117 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.117 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.117 issued rwts: total=559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.117 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.117 00:11:15.117 Run status group 0 (all jobs): 00:11:15.117 READ: bw=26.8MiB/s (28.1MB/s), 820KiB/s-15.8MiB/s (839kB/s-16.5MB/s), io=90.2MiB (94.6MB), run=2723-3367msec 00:11:15.117 00:11:15.117 Disk stats (read/write): 00:11:15.117 nvme0n1: ios=1604/0, merge=0/0, ticks=3010/0, in_queue=3010, util=94.98% 00:11:15.117 nvme0n2: ios=9078/0, merge=0/0, ticks=3040/0, in_queue=3040, util=94.14% 00:11:15.117 nvme0n3: ios=11620/0, merge=0/0, ticks=2599/0, in_queue=2599, util=95.94% 
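The io_u "Operation not supported" errors in this final pass are deliberate: fio.sh deletes the backing bdevs while the 10-second read job is still in flight, so every job loses its namespace mid-run and fio is expected to exit non-zero (hence the "nvmf hotplug test: fio failed as expected" message further down). A sketch of that hot-removal sequence, using the same wrapper, rpc.py path and bdev names as above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Kick off the 10-second read pass in the background and give it a head start.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # Pull the backing bdevs out from under the running jobs, RAIDs first.
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete $m
    done

    # fio should now fail; its non-zero exit status is the pass condition.
    fio_status=0
    wait $fio_pid || fio_status=$?
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    if [ "$fio_status" -ne 0 ]; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi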
00:11:15.117 nvme0n4: ios=604/0, merge=0/0, ticks=3622/0, in_queue=3622, util=99.26% 00:11:15.117 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.117 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:15.373 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.373 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:15.630 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.630 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:15.888 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.888 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:15.888 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:15.888 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 629662 00:11:15.888 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:15.888 07:20:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:16.146 nvmf hotplug test: fio failed as expected 00:11:16.146 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.404 rmmod nvme_tcp 00:11:16.404 rmmod nvme_fabrics 00:11:16.404 rmmod nvme_keyring 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 626834 ']' 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 626834 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 626834 ']' 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 626834 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 626834 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 626834' 00:11:16.404 killing process with pid 626834 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 626834 00:11:16.404 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 626834 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.662 07:20:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.191 00:11:19.191 real 0m26.208s 00:11:19.191 user 1m46.950s 00:11:19.191 sys 0m8.327s 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.191 ************************************ 00:11:19.191 END TEST nvmf_fio_target 00:11:19.191 ************************************ 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.191 ************************************ 00:11:19.191 START TEST nvmf_bdevio 00:11:19.191 ************************************ 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:19.191 * Looking for test storage... 
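The tail end of nvmf_fio_target above is its hotplug check: target/fio.sh deletes the remaining Malloc bdevs underneath a still-running fio job, so a non-zero fio exit status is the passing outcome ("nvmf hotplug test: fio failed as expected"), and the err=95 (Operation not supported) entries in the per-job fio output are the expected symptom of I/O landing on namespaces whose backing bdevs have just been removed. Reduced to a stand-alone sketch (the $fio_pid variable name is illustrative; the real logic lives in target/fio.sh), the check amounts to:

    # hedged sketch of the expected-failure check, not the verbatim fio.sh code
    fio_status=0
    wait "$fio_pid" || fio_status=$?   # fio was launched in the background before the bdevs were deleted
    if ((fio_status == 0)); then
        echo "nvmf hotplug test: fio unexpectedly survived bdev removal" >&2
        exit 1
    fi
    echo 'nvmf hotplug test: fio failed as expected'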
00:11:19.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.191 --rc genhtml_branch_coverage=1 00:11:19.191 --rc genhtml_function_coverage=1 00:11:19.191 --rc genhtml_legend=1 00:11:19.191 --rc geninfo_all_blocks=1 00:11:19.191 --rc geninfo_unexecuted_blocks=1 00:11:19.191 00:11:19.191 ' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.191 --rc genhtml_branch_coverage=1 00:11:19.191 --rc genhtml_function_coverage=1 00:11:19.191 --rc genhtml_legend=1 00:11:19.191 --rc geninfo_all_blocks=1 00:11:19.191 --rc geninfo_unexecuted_blocks=1 00:11:19.191 00:11:19.191 ' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.191 --rc genhtml_branch_coverage=1 00:11:19.191 --rc genhtml_function_coverage=1 00:11:19.191 --rc genhtml_legend=1 00:11:19.191 --rc geninfo_all_blocks=1 00:11:19.191 --rc geninfo_unexecuted_blocks=1 00:11:19.191 00:11:19.191 ' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.191 --rc genhtml_branch_coverage=1 00:11:19.191 --rc genhtml_function_coverage=1 00:11:19.191 --rc genhtml_legend=1 00:11:19.191 --rc geninfo_all_blocks=1 00:11:19.191 --rc geninfo_unexecuted_blocks=1 00:11:19.191 00:11:19.191 ' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.191 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.192 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.449 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.449 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.449 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.450 07:20:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.450 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.450 Found net devices under 0000:86:00.1: cvl_0_1 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.450 
07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.450 07:20:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:11:24.450 00:11:24.450 --- 10.0.0.2 ping statistics --- 00:11:24.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.450 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:24.450 00:11:24.450 --- 10.0.0.1 ping statistics --- 00:11:24.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.450 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=634149 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 634149 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 634149 ']' 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.450 [2024-11-26 07:20:52.244503] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
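The nvmf_tcp_init block traced above turns the two e810 ports into an initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), an iptables rule opens TCP port 4420, and a ping in each direction confirms the link before nvmf_tgt is started inside the namespace. Stripped of the xtrace prefixes, the wiring is roughly the following condensed sketch (not the verbatim nvmf/common.sh code):

    # condensed sketch of the namespace wiring shown in the trace above
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                        # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> root namespace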
00:11:24.450 [2024-11-26 07:20:52.244545] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.450 [2024-11-26 07:20:52.309156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.450 [2024-11-26 07:20:52.352034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.450 [2024-11-26 07:20:52.352070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.450 [2024-11-26 07:20:52.352077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.450 [2024-11-26 07:20:52.352083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.450 [2024-11-26 07:20:52.352088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.450 [2024-11-26 07:20:52.353731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:24.450 [2024-11-26 07:20:52.353839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:24.450 [2024-11-26 07:20:52.353951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.450 [2024-11-26 07:20:52.353961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.450 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.451 [2024-11-26 07:20:52.489543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.451 Malloc0 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.451 07:20:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.451 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.451 [2024-11-26 07:20:52.543414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.708 { 00:11:24.708 "params": { 00:11:24.708 "name": "Nvme$subsystem", 00:11:24.708 "trtype": "$TEST_TRANSPORT", 00:11:24.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.708 "adrfam": "ipv4", 00:11:24.708 "trsvcid": "$NVMF_PORT", 00:11:24.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.708 "hdgst": ${hdgst:-false}, 00:11:24.708 "ddgst": ${ddgst:-false} 00:11:24.708 }, 00:11:24.708 "method": "bdev_nvme_attach_controller" 00:11:24.708 } 00:11:24.708 EOF 00:11:24.708 )") 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:24.708 07:20:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.708 "params": { 00:11:24.708 "name": "Nvme1", 00:11:24.708 "trtype": "tcp", 00:11:24.708 "traddr": "10.0.0.2", 00:11:24.708 "adrfam": "ipv4", 00:11:24.708 "trsvcid": "4420", 00:11:24.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.708 "hdgst": false, 00:11:24.708 "ddgst": false 00:11:24.708 }, 00:11:24.708 "method": "bdev_nvme_attach_controller" 00:11:24.708 }' 00:11:24.708 [2024-11-26 07:20:52.594099] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
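Target-side provisioning for the bdevio run is just four RPCs issued through the rpc_cmd wrapper: create the TCP transport, back it with a 64 MiB / 512-byte-block malloc bdev, expose that bdev through subsystem cnode1, and add a listener on 10.0.0.2:4420; bdevio is then launched with a generated JSON config that attaches cnode1 back as controller Nvme1 (bdev Nvme1n1) over the same address. Run by hand against a live nvmf_tgt, the setup is approximately the following (rpc.py path shortened; adjust to your tree):

    # approximate rpc.py equivalents of the bdevio.sh target setup traced above
    rpc=scripts/rpc.py                  # run from the SPDK repository root
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420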
00:11:24.708 [2024-11-26 07:20:52.594143] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634176 ] 00:11:24.708 [2024-11-26 07:20:52.656660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:24.708 [2024-11-26 07:20:52.700975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.708 [2024-11-26 07:20:52.701070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.708 [2024-11-26 07:20:52.701073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.966 I/O targets: 00:11:24.966 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:24.966 00:11:24.966 00:11:24.966 CUnit - A unit testing framework for C - Version 2.1-3 00:11:24.966 http://cunit.sourceforge.net/ 00:11:24.966 00:11:24.966 00:11:24.966 Suite: bdevio tests on: Nvme1n1 00:11:24.966 Test: blockdev write read block ...passed 00:11:24.966 Test: blockdev write zeroes read block ...passed 00:11:24.966 Test: blockdev write zeroes read no split ...passed 00:11:24.966 Test: blockdev write zeroes read split ...passed 00:11:24.966 Test: blockdev write zeroes read split partial ...passed 00:11:24.966 Test: blockdev reset ...[2024-11-26 07:20:53.007512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:24.966 [2024-11-26 07:20:53.007577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ab340 (9): Bad file descriptor 00:11:25.222 [2024-11-26 07:20:53.065206] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:25.222 passed 00:11:25.222 Test: blockdev write read 8 blocks ...passed 00:11:25.222 Test: blockdev write read size > 128k ...passed 00:11:25.222 Test: blockdev write read invalid size ...passed 00:11:25.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.222 Test: blockdev write read max offset ...passed 00:11:25.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.223 Test: blockdev writev readv 8 blocks ...passed 00:11:25.223 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.223 Test: blockdev writev readv block ...passed 00:11:25.223 Test: blockdev writev readv size > 128k ...passed 00:11:25.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.223 Test: blockdev comparev and writev ...[2024-11-26 07:20:53.274652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.223 [2024-11-26 07:20:53.274681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:25.223 [2024-11-26 07:20:53.274695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.223 [2024-11-26 07:20:53.274702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:25.223 [2024-11-26 07:20:53.274952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.223 [2024-11-26 07:20:53.274963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:25.223 [2024-11-26 07:20:53.274975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.223 [2024-11-26 07:20:53.274983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:25.223 [2024-11-26 07:20:53.275239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.223 [2024-11-26 07:20:53.275249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:25.223 [2024-11-26 07:20:53.275261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.223 [2024-11-26 07:20:53.275269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:25.223 [2024-11-26 07:20:53.275503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.223 [2024-11-26 07:20:53.275513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:25.223 [2024-11-26 07:20:53.275525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.223 [2024-11-26 07:20:53.275533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:25.223 passed 00:11:25.480 Test: blockdev nvme passthru rw ...passed 00:11:25.480 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:20:53.357282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.480 [2024-11-26 07:20:53.357300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:25.480 [2024-11-26 07:20:53.357407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.480 [2024-11-26 07:20:53.357416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:25.480 [2024-11-26 07:20:53.357516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.480 [2024-11-26 07:20:53.357526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:25.480 [2024-11-26 07:20:53.357627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.480 [2024-11-26 07:20:53.357636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:25.480 passed 00:11:25.480 Test: blockdev nvme admin passthru ...passed 00:11:25.480 Test: blockdev copy ...passed 00:11:25.480 00:11:25.480 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.480 suites 1 1 n/a 0 0 00:11:25.480 tests 23 23 23 0 0 00:11:25.480 asserts 152 152 152 0 n/a 00:11:25.480 00:11:25.480 Elapsed time = 1.033 seconds 00:11:25.480 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.480 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.481 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.481 rmmod nvme_tcp 00:11:25.481 rmmod nvme_fabrics 00:11:25.739 rmmod nvme_keyring 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
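nvmfcleanup has synced and unloaded the initiator-side modules at this point; the remainder of nvmftestfini kills the target app by its recorded pid, strips the SPDK-tagged iptables rules, and removes the namespace. Condensed into an illustrative stand-alone sketch (variable names are assumptions; the real helpers live in test/nvmf/common.sh):

    # illustrative sketch of the nvmftestfini teardown, not the verbatim helper
    sync
    for mod in nvme-tcp nvme-fabrics nvme-keyring; do
        modprobe -v -r "$mod" || true                        # tolerate modules that are already gone
    done
    if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"                                      # stop the nvmf_tgt started for this suite
        wait "$nvmfpid" || true
    fi
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK_NVMF-tagged rules
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true         # tear down the target namespace
    ip -4 addr flush cvl_0_1 2>/dev/null || true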
00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 634149 ']' 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 634149 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 634149 ']' 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 634149 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634149 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634149' 00:11:25.739 killing process with pid 634149 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 634149 00:11:25.739 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 634149 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.998 07:20:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.900 07:20:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.900 00:11:27.900 real 0m9.176s 00:11:27.900 user 0m9.370s 00:11:27.900 sys 0m4.450s 00:11:27.900 07:20:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.900 07:20:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.900 ************************************ 00:11:27.900 END TEST nvmf_bdevio 00:11:27.900 ************************************ 00:11:27.900 07:20:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:27.900 00:11:27.900 real 4m27.655s 00:11:27.900 user 10m15.808s 00:11:27.900 sys 1m32.059s 00:11:27.900 
07:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.900 07:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.900 ************************************ 00:11:27.900 END TEST nvmf_target_core 00:11:27.900 ************************************ 00:11:27.900 07:20:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.900 07:20:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.900 07:20:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.900 07:20:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.159 ************************************ 00:11:28.159 START TEST nvmf_target_extra 00:11:28.159 ************************************ 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:28.159 * Looking for test storage... 00:11:28.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.159 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:28.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.160 --rc genhtml_branch_coverage=1 00:11:28.160 --rc genhtml_function_coverage=1 00:11:28.160 --rc genhtml_legend=1 00:11:28.160 --rc geninfo_all_blocks=1 00:11:28.160 --rc geninfo_unexecuted_blocks=1 00:11:28.160 00:11:28.160 ' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:28.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.160 --rc genhtml_branch_coverage=1 00:11:28.160 --rc genhtml_function_coverage=1 00:11:28.160 --rc genhtml_legend=1 00:11:28.160 --rc geninfo_all_blocks=1 00:11:28.160 --rc geninfo_unexecuted_blocks=1 00:11:28.160 00:11:28.160 ' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:28.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.160 --rc genhtml_branch_coverage=1 00:11:28.160 --rc genhtml_function_coverage=1 00:11:28.160 --rc genhtml_legend=1 00:11:28.160 --rc geninfo_all_blocks=1 00:11:28.160 --rc geninfo_unexecuted_blocks=1 00:11:28.160 00:11:28.160 ' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:28.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.160 --rc genhtml_branch_coverage=1 00:11:28.160 --rc genhtml_function_coverage=1 00:11:28.160 --rc genhtml_legend=1 00:11:28.160 --rc geninfo_all_blocks=1 00:11:28.160 --rc geninfo_unexecuted_blocks=1 00:11:28.160 00:11:28.160 ' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
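The scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is a plain component-wise version comparison: both strings are split on '.', '-' and ':', missing components are padded, and the parts are compared numerically until one side differs. A compact sketch of the same idea, assuming purely numeric components; the helper name is illustrative:

  # Sketch of the dotted-version comparison traced from scripts/common.sh.
  # version_lt 1.15 2  -> true, because 1 < 2 in the first component.
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}     # missing components count as 0
          if (( x > y )); then return 1; fi   # left side newer -> not less-than
          if (( x < y )); then return 0; fi   # left side older -> less-than
      done
      return 1                                 # equal -> not less-than
  }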
00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.160 ************************************ 00:11:28.160 START TEST nvmf_example 00:11:28.160 ************************************ 00:11:28.160 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:28.419 * Looking for test storage... 
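By this point common.sh has also fixed the initiator identity for the whole run: nvme gen-hostnqn returns a UUID-based host NQN (the nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-... value above), the matching UUID is kept as the host ID, and both are stored in the NVME_HOST array so later nvme connect calls present a consistent identity. A sketch of how that identity is produced and consumed; the parameter expansion used to derive the host ID and the final connect command are illustrative assumptions, not commands from the trace shown here:

  # Sketch: generate and reuse a host identity the way common.sh sets it up.
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumed derivation: reuse the UUID part
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # typical initiator-side attach using that identity (address/NQN values from this run):
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"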
00:11:28.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.419 --rc genhtml_branch_coverage=1 00:11:28.419 --rc genhtml_function_coverage=1 00:11:28.419 --rc genhtml_legend=1 00:11:28.419 --rc geninfo_all_blocks=1 00:11:28.419 --rc geninfo_unexecuted_blocks=1 00:11:28.419 00:11:28.419 ' 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.419 --rc genhtml_branch_coverage=1 00:11:28.419 --rc genhtml_function_coverage=1 00:11:28.419 --rc genhtml_legend=1 00:11:28.419 --rc geninfo_all_blocks=1 00:11:28.419 --rc geninfo_unexecuted_blocks=1 00:11:28.419 00:11:28.419 ' 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.419 --rc genhtml_branch_coverage=1 00:11:28.419 --rc genhtml_function_coverage=1 00:11:28.419 --rc genhtml_legend=1 00:11:28.419 --rc geninfo_all_blocks=1 00:11:28.419 --rc geninfo_unexecuted_blocks=1 00:11:28.419 00:11:28.419 ' 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:28.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.419 --rc genhtml_branch_coverage=1 00:11:28.419 --rc genhtml_function_coverage=1 00:11:28.419 --rc genhtml_legend=1 00:11:28.419 --rc geninfo_all_blocks=1 00:11:28.419 --rc geninfo_unexecuted_blocks=1 00:11:28.419 00:11:28.419 ' 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:28.419 07:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.419 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:28.420 07:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.420 07:20:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:33.689 07:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:33.689 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:33.689 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:33.689 Found net devices under 0000:86:00.0: cvl_0_0 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:33.689 Found net devices under 0000:86:00.1: cvl_0_1 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.689 07:21:01 
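gather_supported_nvmf_pci_devs, traced above, builds per-family device-ID lists (e810, x722, Mellanox), keeps the E810 ports found at 0000:86:00.0/.1, and resolves each PCI address to its kernel interface through sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. A stripped-down sketch of that sysfs lookup, using one PCI address from this log; the surrounding checks are assumed boilerplate:

  # Sketch: map a PCI address to the net device(s) registered under it,
  # mirroring the /sys/bus/pci/devices/<pci>/net/* lookup in the trace above.
  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  if [ -d "${pci_net_devs[0]}" ]; then
      pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only interface names, e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  else
      echo "No net devices bound under $pci (driver not loaded?)"
  fi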
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.689 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.948 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:33.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:11:33.949 00:11:33.949 --- 10.0.0.2 ping statistics --- 00:11:33.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.949 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:11:33.949 00:11:33.949 --- 10.0.0.1 ping statistics --- 00:11:33.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.949 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.949 07:21:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=638096 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 638096 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 638096 ']' 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example 
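nvmf_tcp_init, traced above, wires the two E810 ports into a self-contained test path: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace, the two ends get 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator), an iptables ACCEPT rule tagged SPDK_NVMF opens TCP port 4420, and a ping in each direction proves connectivity before the nvmf example app is launched inside the namespace. Reproduced by hand, the setup looks roughly like the following; interface names and addresses are the ones from this run, the iptables comment text is abbreviated, and the commands need root:

  # Sketch of the namespace-based test network built by nvmf_tcp_init in this log.
  TGT_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                          # target side lives in the netns
  ip addr add 10.0.0.1/24 dev "$INIT_IF"                     # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port; the comment tag lets cleanup find the rule later
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF: test rule'
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
  # the example target is then started inside the namespace (path shortened here):
  # ip netns exec "$NS" ./build/examples/nvmf -i 0 -g 10000 -m 0xF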
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.949 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.883 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.141 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.141 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.141 07:21:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.141 07:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:35.141 07:21:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:47.339 Initializing NVMe Controllers 00:11:47.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:47.339 Initialization complete. Launching workers. 00:11:47.339 ======================================================== 00:11:47.339 Latency(us) 00:11:47.339 Device Information : IOPS MiB/s Average min max 00:11:47.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18087.68 70.65 3539.27 696.17 15389.95 00:11:47.339 ======================================================== 00:11:47.339 Total : 18087.68 70.65 3539.27 696.17 15389.95 00:11:47.339 00:11:47.339 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:47.339 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:47.339 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.339 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:47.339 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.340 rmmod nvme_tcp 00:11:47.340 rmmod nvme_fabrics 00:11:47.340 rmmod nvme_keyring 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 638096 ']' 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 638096 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 638096 ']' 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 638096 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 638096 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
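The RPC sequence traced just above is the entire target-side configuration for this example test: create the TCP transport with the traced options (-o -u 8192), create a small malloc bdev (64 MB, 512-byte blocks, returned as Malloc0), expose it as a namespace of nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420; spdk_nvme_perf then drives 4 KiB random mixed read/write I/O at queue depth 64 for 10 seconds over NVMe/TCP, landing at roughly 18.1k IOPS / 70.65 MiB/s with a 3539 us mean latency in this run. Outside the harness the same steps look roughly like this, with scripts/rpc.py standing in for the rpc_cmd wrapper and paths relative to an SPDK checkout (a running nvmf target app is assumed):

  # Sketch of the traced target setup and perf run, issued through scripts/rpc.py.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512          # 64 MB bdev, 512 B blocks -> "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 4 KiB random mixed read/write (-M 30), queue depth 64, 10 s, over NVMe/TCP
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'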
process_name=nvmf 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 638096' 00:11:47.340 killing process with pid 638096 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 638096 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 638096 00:11:47.340 nvmf threads initialize successfully 00:11:47.340 bdev subsystem init successfully 00:11:47.340 created a nvmf target service 00:11:47.340 create targets's poll groups done 00:11:47.340 all subsystems of target started 00:11:47.340 nvmf target is running 00:11:47.340 all subsystems of target stopped 00:11:47.340 destroy targets's poll groups done 00:11:47.340 destroyed the nvmf target service 00:11:47.340 bdev subsystem finish successfully 00:11:47.340 nvmf threads destroy successfully 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.340 07:21:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.907 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.907 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:47.907 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.907 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.907 00:11:47.907 real 0m19.534s 00:11:47.907 user 0m46.439s 00:11:47.907 sys 0m5.819s 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:47.908 ************************************ 00:11:47.908 END TEST nvmf_example 00:11:47.908 ************************************ 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:47.908 07:21:15 
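Teardown, traced above, mirrors the setup: unload nvme-tcp/nvme-fabrics, kill the example target, drop every firewall rule carrying the SPDK_NVMF comment by filtering iptables-save output back through iptables-restore (the iptr helper), remove the test namespace, and flush the initiator address. The firewall and namespace part in isolation, as a sketch; the explicit netns delete is an assumed equivalent of the remove_spdk_ns call whose output is suppressed in the log:

  # Sketch of the traced cleanup: strip SPDK_NVMF-tagged rules, then drop the netns.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1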
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.908 ************************************ 00:11:47.908 START TEST nvmf_filesystem 00:11:47.908 ************************************ 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:47.908 * Looking for test storage... 00:11:47.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.908 07:21:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:47.908 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:47.908 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.168 --rc genhtml_branch_coverage=1 00:11:48.168 --rc genhtml_function_coverage=1 00:11:48.168 --rc genhtml_legend=1 00:11:48.168 --rc geninfo_all_blocks=1 00:11:48.168 --rc geninfo_unexecuted_blocks=1 00:11:48.168 00:11:48.168 ' 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.168 --rc genhtml_branch_coverage=1 00:11:48.168 --rc genhtml_function_coverage=1 00:11:48.168 --rc genhtml_legend=1 00:11:48.168 --rc geninfo_all_blocks=1 00:11:48.168 --rc geninfo_unexecuted_blocks=1 00:11:48.168 00:11:48.168 ' 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.168 --rc genhtml_branch_coverage=1 00:11:48.168 --rc genhtml_function_coverage=1 00:11:48.168 --rc genhtml_legend=1 00:11:48.168 --rc geninfo_all_blocks=1 00:11:48.168 --rc geninfo_unexecuted_blocks=1 00:11:48.168 00:11:48.168 ' 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.168 --rc genhtml_branch_coverage=1 00:11:48.168 --rc genhtml_function_coverage=1 00:11:48.168 --rc genhtml_legend=1 00:11:48.168 --rc geninfo_all_blocks=1 00:11:48.168 --rc geninfo_unexecuted_blocks=1 00:11:48.168 00:11:48.168 ' 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:48.168 07:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:48.168 
07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:48.168 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:48.169 #define SPDK_CONFIG_H 00:11:48.169 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:48.169 #define SPDK_CONFIG_APPS 1 00:11:48.169 #define SPDK_CONFIG_ARCH native 00:11:48.169 #undef SPDK_CONFIG_ASAN 00:11:48.169 #undef SPDK_CONFIG_AVAHI 00:11:48.169 #undef SPDK_CONFIG_CET 00:11:48.169 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:48.169 #define SPDK_CONFIG_COVERAGE 1 00:11:48.169 #define SPDK_CONFIG_CROSS_PREFIX 00:11:48.169 #undef SPDK_CONFIG_CRYPTO 00:11:48.169 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:48.169 #undef SPDK_CONFIG_CUSTOMOCF 00:11:48.169 #undef SPDK_CONFIG_DAOS 00:11:48.169 #define SPDK_CONFIG_DAOS_DIR 00:11:48.169 #define SPDK_CONFIG_DEBUG 1 00:11:48.169 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:48.169 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:48.169 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:48.169 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:48.169 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:48.169 #undef SPDK_CONFIG_DPDK_UADK 00:11:48.169 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:48.169 #define SPDK_CONFIG_EXAMPLES 1 00:11:48.169 #undef SPDK_CONFIG_FC 00:11:48.169 #define SPDK_CONFIG_FC_PATH 00:11:48.169 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:48.169 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:48.169 #define SPDK_CONFIG_FSDEV 1 00:11:48.169 #undef SPDK_CONFIG_FUSE 00:11:48.169 #undef SPDK_CONFIG_FUZZER 00:11:48.169 #define SPDK_CONFIG_FUZZER_LIB 00:11:48.169 #undef SPDK_CONFIG_GOLANG 00:11:48.169 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:48.169 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:48.169 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:48.169 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:48.169 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:48.169 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:48.169 #undef SPDK_CONFIG_HAVE_LZ4 00:11:48.169 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:48.169 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:48.169 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:48.169 #define SPDK_CONFIG_IDXD 1 00:11:48.169 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:48.169 #undef SPDK_CONFIG_IPSEC_MB 00:11:48.169 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:48.169 #define SPDK_CONFIG_ISAL 1 00:11:48.169 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:48.169 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:48.169 #define SPDK_CONFIG_LIBDIR 00:11:48.169 #undef SPDK_CONFIG_LTO 00:11:48.169 #define SPDK_CONFIG_MAX_LCORES 128 00:11:48.169 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:48.169 #define SPDK_CONFIG_NVME_CUSE 1 00:11:48.169 #undef SPDK_CONFIG_OCF 00:11:48.169 #define SPDK_CONFIG_OCF_PATH 00:11:48.169 #define SPDK_CONFIG_OPENSSL_PATH 00:11:48.169 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:48.169 #define SPDK_CONFIG_PGO_DIR 00:11:48.169 #undef SPDK_CONFIG_PGO_USE 00:11:48.169 #define SPDK_CONFIG_PREFIX /usr/local 00:11:48.169 #undef SPDK_CONFIG_RAID5F 00:11:48.169 #undef SPDK_CONFIG_RBD 00:11:48.169 #define SPDK_CONFIG_RDMA 1 00:11:48.169 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:48.169 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:48.169 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:48.169 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:48.169 #define SPDK_CONFIG_SHARED 1 00:11:48.169 #undef SPDK_CONFIG_SMA 00:11:48.169 #define SPDK_CONFIG_TESTS 1 00:11:48.169 #undef SPDK_CONFIG_TSAN 
00:11:48.169 #define SPDK_CONFIG_UBLK 1 00:11:48.169 #define SPDK_CONFIG_UBSAN 1 00:11:48.169 #undef SPDK_CONFIG_UNIT_TESTS 00:11:48.169 #undef SPDK_CONFIG_URING 00:11:48.169 #define SPDK_CONFIG_URING_PATH 00:11:48.169 #undef SPDK_CONFIG_URING_ZNS 00:11:48.169 #undef SPDK_CONFIG_USDT 00:11:48.169 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:48.169 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:48.169 #define SPDK_CONFIG_VFIO_USER 1 00:11:48.169 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:48.169 #define SPDK_CONFIG_VHOST 1 00:11:48.169 #define SPDK_CONFIG_VIRTIO 1 00:11:48.169 #undef SPDK_CONFIG_VTUNE 00:11:48.169 #define SPDK_CONFIG_VTUNE_DIR 00:11:48.169 #define SPDK_CONFIG_WERROR 1 00:11:48.169 #define SPDK_CONFIG_WPDK_DIR 00:11:48.169 #undef SPDK_CONFIG_XNVME 00:11:48.169 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.169 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:48.170 07:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
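The pairs of `: 0` / `: 1` lines followed by `export SPDK_TEST_*` in this stretch of the trace are bash xtrace for the usual default-and-export idiom: the no-op `:` builtin evaluates a `${VAR:=default}`-style expansion, so flags already set by the sourced autorun-spdk.conf keep their values and anything unset falls back to a default before being exported for child processes. A minimal sketch of that idiom, assuming illustrative defaults rather than the exact list autotest_common.sh carries:

    #!/usr/bin/env bash
    # Default-and-export sketch: values from an earlier `source autorun-spdk.conf`
    # win; unset flags fall back to the value on the right of :=.
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}";    export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=0}";              export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVME_CLI:=0}";          export SPDK_TEST_NVME_CLI
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
    # With the conf file for this job sourced first, values such as tcp survive:
    echo "functional=${SPDK_RUN_FUNCTIONAL_TEST} transport=${SPDK_TEST_NVMF_TRANSPORT}"

With tracing enabled, each expansion is what produces the bare `: 0` (or `: tcp`) line in the log, immediately followed by the matching `export` line.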
00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:48.170 07:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:48.170 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
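The long LD_LIBRARY_PATH and PYTHONPATH values above accumulate duplicate segments because the common scripts are re-sourced for every nested test, but the intent is simply to make the freshly built SPDK libraries, the bundled DPDK build, the libvfio-user install prefix, and the Python RPC helpers visible to binaries launched straight from the build tree. A hedged sketch of that environment setup, reusing the workspace paths from the trace; the duplicate guard is an addition of the sketch, not something the trace shows:

    #!/usr/bin/env bash
    # Sketch: expose build-tree libraries and Python helpers for out-of-tree runs.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    libs="$rootdir/build/lib:$rootdir/dpdk/build/lib:$rootdir/build/libvfio-user/usr/local/lib"
    case ":${LD_LIBRARY_PATH:-}:" in
        *":$rootdir/build/lib:"*) ;;   # already present, keep it idempotent
        *) export LD_LIBRARY_PATH="$libs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" ;;
    esac

    export PYTHONPATH="$rootdir/python:$rootdir/test/rpc_plugins${PYTHONPATH:+:$PYTHONPATH}"
    export PYTHONDONTWRITEBYTECODE=1    # keep the checkout free of .pyc files

    # Shared-object resolution for a build-tree binary now succeeds:
    ldd "$rootdir/build/bin/nvmf_tgt" | grep -E 'spdk|dpdk' | head

The real helpers in the trace simply re-prepend on every source, which is harmless apart from the noisy environment seen above.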
00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
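The ASAN_OPTIONS, UBSAN_OPTIONS and LSAN_OPTIONS exports above tune sanitizer behaviour at run time, and the trace also rebuilds a leak-suppression file so a known libfuse3 leak does not turn an otherwise clean run into a failure. A condensed sketch of that setup; the option strings and the `leak:libfuse3.so` entry are taken from the trace, while the way the file is assembled here is simplified:

    #!/usr/bin/env bash
    # Sketch: runtime sanitizer options plus a LeakSanitizer suppression file.
    suppressions=/var/tmp/asan_suppression_file

    rm -f "$suppressions"
    echo "leak:libfuse3.so" >> "$suppressions"   # known leak in libfuse3, ignore it

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$suppressions

    # Every instrumented binary started from this shell inherits the settings;
    # with halt_on_error=1 and abort_on_error=1, UBSAN aborts (exit code 134)
    # on the first reported error.

Since this build has CONFIG_ASAN=n but CONFIG_UBSAN=y in the build config dumped earlier, the ASAN/LSAN pieces are inert for this run and the UBSAN_OPTIONS string is the one that matters.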
00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 640907 ]] 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 640907 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:48.171 
07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.AxSEus 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.AxSEus/tests/target /tmp/spdk.AxSEus 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:48.171 07:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189175382016 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:11:48.171 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6788579328 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97980825600 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1155072 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:48.172 07:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:48.172 * Looking for test storage... 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189175382016 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9003171840 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:48.172 07:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:48.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.172 --rc genhtml_branch_coverage=1 00:11:48.172 --rc genhtml_function_coverage=1 00:11:48.172 --rc genhtml_legend=1 00:11:48.172 --rc geninfo_all_blocks=1 00:11:48.172 --rc geninfo_unexecuted_blocks=1 00:11:48.172 00:11:48.172 ' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:48.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.172 --rc genhtml_branch_coverage=1 00:11:48.172 --rc genhtml_function_coverage=1 00:11:48.172 --rc genhtml_legend=1 00:11:48.172 --rc geninfo_all_blocks=1 00:11:48.172 --rc geninfo_unexecuted_blocks=1 00:11:48.172 00:11:48.172 ' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:48.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.172 --rc genhtml_branch_coverage=1 00:11:48.172 --rc genhtml_function_coverage=1 00:11:48.172 --rc genhtml_legend=1 00:11:48.172 --rc geninfo_all_blocks=1 00:11:48.172 --rc geninfo_unexecuted_blocks=1 00:11:48.172 00:11:48.172 ' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:48.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.172 --rc genhtml_branch_coverage=1 00:11:48.172 --rc genhtml_function_coverage=1 00:11:48.172 --rc genhtml_legend=1 00:11:48.172 --rc geninfo_all_blocks=1 00:11:48.172 --rc geninfo_unexecuted_blocks=1 00:11:48.172 00:11:48.172 ' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.172 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.173 07:21:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:53.430 
07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.430 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:53.431 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:53.431 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:53.431 Found net devices under 0000:86:00.0: cvl_0_0 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:53.431 Found net devices under 
0000:86:00.1: cvl_0_1 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.431 07:21:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:11:53.431 00:11:53.431 --- 10.0.0.2 ping statistics --- 00:11:53.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.431 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:11:53.431 00:11:53.431 --- 10.0.0.1 ping statistics --- 00:11:53.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.431 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.431 ************************************ 00:11:53.431 START TEST nvmf_filesystem_no_in_capsule 00:11:53.431 ************************************ 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:53.431 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
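[editor's note] The trace above is the nvmf_tcp_init phase: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, both get addresses on 10.0.0.0/24, an iptables rule opens TCP port 4420, and reachability is verified with ping in both directions. A minimal standalone sketch of the same setup, assuming the interface names and addresses used in this run:

    # create the target namespace and move one port into it (names as in this run)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator side (root namespace) and the target side (inside the namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to the listener port and check connectivity both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1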
00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=643944 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 643944 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 643944 ']' 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.432 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.432 [2024-11-26 07:21:21.363143] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:11:53.432 [2024-11-26 07:21:21.363188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.432 [2024-11-26 07:21:21.431169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.432 [2024-11-26 07:21:21.473085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.432 [2024-11-26 07:21:21.473125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.432 [2024-11-26 07:21:21.473132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.432 [2024-11-26 07:21:21.473138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.432 [2024-11-26 07:21:21.473143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
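[editor's note] At this point nvmfappstart has launched the SPDK target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 643944) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers. A rough equivalent of that launch-and-wait step, assuming the same namespace and the default RPC socket path; the polling loop below is illustrative, not the script's own waitforlisten implementation:

    # start the target in the namespace: shm id 0, all trace groups, core mask 0xF
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    pid=$!
    # poll the RPC socket until the app is ready to accept commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done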
00:11:53.432 [2024-11-26 07:21:21.474730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.432 [2024-11-26 07:21:21.474823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.432 [2024-11-26 07:21:21.474892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.432 [2024-11-26 07:21:21.474893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.691 [2024-11-26 07:21:21.619841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.691 Malloc1 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.691 07:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.691 [2024-11-26 07:21:21.778390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.691 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:53.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:53.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:53.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:53.692 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:53.950 { 00:11:53.950 "name": "Malloc1", 00:11:53.950 "aliases": [ 00:11:53.950 "4a6d2a94-1c3e-4d5e-b76e-c983f1fa751d" 00:11:53.950 ], 00:11:53.950 "product_name": "Malloc disk", 00:11:53.950 "block_size": 512, 00:11:53.950 "num_blocks": 1048576, 00:11:53.950 "uuid": "4a6d2a94-1c3e-4d5e-b76e-c983f1fa751d", 00:11:53.950 "assigned_rate_limits": { 00:11:53.950 "rw_ios_per_sec": 0, 00:11:53.950 "rw_mbytes_per_sec": 0, 00:11:53.950 "r_mbytes_per_sec": 0, 00:11:53.950 "w_mbytes_per_sec": 0 00:11:53.950 }, 00:11:53.950 "claimed": true, 00:11:53.950 "claim_type": "exclusive_write", 00:11:53.950 "zoned": false, 00:11:53.950 "supported_io_types": { 00:11:53.950 "read": 
true, 00:11:53.950 "write": true, 00:11:53.950 "unmap": true, 00:11:53.950 "flush": true, 00:11:53.950 "reset": true, 00:11:53.950 "nvme_admin": false, 00:11:53.950 "nvme_io": false, 00:11:53.950 "nvme_io_md": false, 00:11:53.950 "write_zeroes": true, 00:11:53.950 "zcopy": true, 00:11:53.950 "get_zone_info": false, 00:11:53.950 "zone_management": false, 00:11:53.950 "zone_append": false, 00:11:53.950 "compare": false, 00:11:53.950 "compare_and_write": false, 00:11:53.950 "abort": true, 00:11:53.950 "seek_hole": false, 00:11:53.950 "seek_data": false, 00:11:53.950 "copy": true, 00:11:53.950 "nvme_iov_md": false 00:11:53.950 }, 00:11:53.950 "memory_domains": [ 00:11:53.950 { 00:11:53.950 "dma_device_id": "system", 00:11:53.950 "dma_device_type": 1 00:11:53.950 }, 00:11:53.950 { 00:11:53.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.950 "dma_device_type": 2 00:11:53.950 } 00:11:53.950 ], 00:11:53.950 "driver_specific": {} 00:11:53.950 } 00:11:53.950 ]' 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:53.950 07:21:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.324 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.324 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:55.324 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.324 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:55.324 07:21:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:57.224 07:21:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:57.224 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:57.790 07:21:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.724 ************************************ 00:11:58.724 START TEST filesystem_ext4 00:11:58.724 ************************************ 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:58.724 07:21:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:58.724 mke2fs 1.47.0 (5-Feb-2023) 00:11:58.982 Discarding device blocks: 0/522240 done 00:11:58.982 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:58.982 Filesystem UUID: 3fcf031a-1140-4bd4-8da9-dae447a1ece2 00:11:58.982 Superblock backups stored on blocks: 00:11:58.982 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:58.982 00:11:58.982 Allocating group tables: 0/64 done 00:11:58.982 Writing inode tables: 0/64 done 00:11:58.982 Creating journal (8192 blocks): done 00:12:00.614 Writing superblocks and filesystem accounting information: 0/64 done 00:12:00.614 00:12:00.614 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:00.614 07:21:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:07.174 
07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 643944 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:07.174 00:12:07.174 real 0m7.802s 00:12:07.174 user 0m0.025s 00:12:07.174 sys 0m0.075s 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:07.174 ************************************ 00:12:07.174 END TEST filesystem_ext4 00:12:07.174 ************************************ 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.174 ************************************ 00:12:07.174 START TEST filesystem_btrfs 00:12:07.174 ************************************ 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:07.174 07:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:07.174 btrfs-progs v6.8.1 00:12:07.174 See https://btrfs.readthedocs.io for more information. 00:12:07.174 00:12:07.174 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:07.174 NOTE: several default settings have changed in version 5.15, please make sure 00:12:07.174 this does not affect your deployments: 00:12:07.174 - DUP for metadata (-m dup) 00:12:07.174 - enabled no-holes (-O no-holes) 00:12:07.174 - enabled free-space-tree (-R free-space-tree) 00:12:07.174 00:12:07.174 Label: (null) 00:12:07.174 UUID: f223baee-990d-4ef0-bdbe-981be21451de 00:12:07.174 Node size: 16384 00:12:07.174 Sector size: 4096 (CPU page size: 4096) 00:12:07.174 Filesystem size: 510.00MiB 00:12:07.174 Block group profiles: 00:12:07.174 Data: single 8.00MiB 00:12:07.174 Metadata: DUP 32.00MiB 00:12:07.174 System: DUP 8.00MiB 00:12:07.174 SSD detected: yes 00:12:07.174 Zoned device: no 00:12:07.174 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:07.174 Checksum: crc32c 00:12:07.174 Number of devices: 1 00:12:07.174 Devices: 00:12:07.174 ID SIZE PATH 00:12:07.174 1 510.00MiB /dev/nvme0n1p1 00:12:07.174 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:07.174 07:21:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 643944 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:08.110 
07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:08.110 00:12:08.110 real 0m1.313s 00:12:08.110 user 0m0.016s 00:12:08.110 sys 0m0.128s 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:08.110 ************************************ 00:12:08.110 END TEST filesystem_btrfs 00:12:08.110 ************************************ 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.110 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.110 ************************************ 00:12:08.110 START TEST filesystem_xfs 00:12:08.110 ************************************ 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:08.111 07:21:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:08.111 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:08.111 = sectsz=512 attr=2, projid32bit=1 00:12:08.111 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:08.111 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:08.111 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:08.111 = sunit=0 swidth=0 blks 00:12:08.111 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:08.111 log =internal log bsize=4096 blocks=16384, version=2 00:12:08.111 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:08.111 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:09.044 Discarding blocks...Done. 00:12:09.044 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:09.044 07:21:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:10.942 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:10.942 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:10.942 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:10.942 07:21:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:10.942 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:10.942 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:10.942 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 643944 00:12:10.942 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:10.942 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:10.942 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:10.942 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.201 00:12:11.201 real 0m3.046s 00:12:11.201 user 0m0.026s 00:12:11.201 sys 0m0.072s 00:12:11.201 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.201 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:11.201 ************************************ 00:12:11.201 END TEST filesystem_xfs 00:12:11.201 ************************************ 00:12:11.201 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.460 07:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 643944 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 643944 ']' 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 643944 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 643944 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.460 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 643944' 00:12:11.460 killing process with pid 643944 00:12:11.461 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 643944 00:12:11.461 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 643944 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:12.028 00:12:12.028 real 0m18.555s 00:12:12.028 user 1m13.023s 00:12:12.028 sys 0m1.459s 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.028 ************************************ 00:12:12.028 END TEST nvmf_filesystem_no_in_capsule 00:12:12.028 ************************************ 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.028 ************************************ 00:12:12.028 START TEST nvmf_filesystem_in_capsule 00:12:12.028 ************************************ 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=647170 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 647170 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 647170 ']' 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
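The in-capsule pass repeats the same filesystem matrix against a freshly started nvmf_tgt; per the trace, the functional difference is the 4096-byte in-capsule data size handed to the transport. Condensed, the target/host bring-up that the following entries walk through looks like this (rpc_cmd in the trace is the suite's wrapper around SPDK's scripts/rpc.py; every argument below is copied from the log, and the wait loop stands in for the waitforserial helper):

# Target side: TCP transport with 4096-byte in-capsule data, a 512 MiB malloc
# bdev, and subsystem cnode1 exposing it on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect with nvme-cli, then wait for the namespace to show up by serial.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
  --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done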
00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.028 07:21:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.028 [2024-11-26 07:21:39.991067] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:12:12.028 [2024-11-26 07:21:39.991110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.028 [2024-11-26 07:21:40.062742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.028 [2024-11-26 07:21:40.102023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.028 [2024-11-26 07:21:40.102077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.028 [2024-11-26 07:21:40.102085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.028 [2024-11-26 07:21:40.102091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.028 [2024-11-26 07:21:40.102096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.028 [2024-11-26 07:21:40.103722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.028 [2024-11-26 07:21:40.103816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.028 [2024-11-26 07:21:40.103904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.028 [2024-11-26 07:21:40.103905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.287 [2024-11-26 07:21:40.249137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.287 07:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.287 Malloc1 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.287 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.551 [2024-11-26 07:21:40.397062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:12.551 07:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.551 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:12.551 { 00:12:12.551 "name": "Malloc1", 00:12:12.551 "aliases": [ 00:12:12.552 "b025b162-ac4c-493e-a074-e17c5ceecb5b" 00:12:12.552 ], 00:12:12.552 "product_name": "Malloc disk", 00:12:12.552 "block_size": 512, 00:12:12.552 "num_blocks": 1048576, 00:12:12.552 "uuid": "b025b162-ac4c-493e-a074-e17c5ceecb5b", 00:12:12.552 "assigned_rate_limits": { 00:12:12.552 "rw_ios_per_sec": 0, 00:12:12.552 "rw_mbytes_per_sec": 0, 00:12:12.552 "r_mbytes_per_sec": 0, 00:12:12.552 "w_mbytes_per_sec": 0 00:12:12.552 }, 00:12:12.552 "claimed": true, 00:12:12.552 "claim_type": "exclusive_write", 00:12:12.552 "zoned": false, 00:12:12.552 "supported_io_types": { 00:12:12.552 "read": true, 00:12:12.552 "write": true, 00:12:12.552 "unmap": true, 00:12:12.552 "flush": true, 00:12:12.552 "reset": true, 00:12:12.552 "nvme_admin": false, 00:12:12.552 "nvme_io": false, 00:12:12.552 "nvme_io_md": false, 00:12:12.552 "write_zeroes": true, 00:12:12.552 "zcopy": true, 00:12:12.552 "get_zone_info": false, 00:12:12.552 "zone_management": false, 00:12:12.552 "zone_append": false, 00:12:12.552 "compare": false, 00:12:12.552 "compare_and_write": false, 00:12:12.552 "abort": true, 00:12:12.552 "seek_hole": false, 00:12:12.552 "seek_data": false, 00:12:12.552 "copy": true, 00:12:12.552 "nvme_iov_md": false 00:12:12.552 }, 00:12:12.552 "memory_domains": [ 00:12:12.552 { 00:12:12.552 "dma_device_id": "system", 00:12:12.552 "dma_device_type": 1 00:12:12.552 }, 00:12:12.552 { 00:12:12.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.552 "dma_device_type": 2 00:12:12.552 } 00:12:12.552 ], 00:12:12.552 "driver_specific": {} 00:12:12.552 } 00:12:12.552 ]' 00:12:12.552 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:12.552 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:12.552 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:12.552 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:12.552 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:12.553 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:12.553 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:12.553 07:21:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.937 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.937 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.937 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.937 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:13.937 07:21:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:15.836 07:21:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:16.095 07:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:16.661 07:21:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.595 ************************************ 00:12:17.595 START TEST filesystem_in_capsule_ext4 00:12:17.595 ************************************ 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:17.595 07:21:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:17.595 mke2fs 1.47.0 (5-Feb-2023) 00:12:17.595 Discarding device blocks: 0/522240 done 00:12:17.595 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:17.595 Filesystem UUID: 2c638e45-6423-45ad-9d66-894b5042c4a2 00:12:17.595 Superblock backups stored on blocks: 00:12:17.595 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:17.595 00:12:17.595 Allocating group tables: 0/64 done 00:12:17.595 Writing inode tables: 
0/64 done 00:12:17.853 Creating journal (8192 blocks): done 00:12:18.370 Writing superblocks and filesystem accounting information: 0/64 done 00:12:18.370 00:12:18.370 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:18.370 07:21:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 647170 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:23.635 00:12:23.635 real 0m6.134s 00:12:23.635 user 0m0.028s 00:12:23.635 sys 0m0.067s 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:23.635 ************************************ 00:12:23.635 END TEST filesystem_in_capsule_ext4 00:12:23.635 ************************************ 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.635 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.893 
************************************ 00:12:23.893 START TEST filesystem_in_capsule_btrfs 00:12:23.893 ************************************ 00:12:23.893 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:23.893 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:23.894 07:21:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:24.152 btrfs-progs v6.8.1 00:12:24.152 See https://btrfs.readthedocs.io for more information. 00:12:24.152 00:12:24.152 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:24.152 NOTE: several default settings have changed in version 5.15, please make sure 00:12:24.152 this does not affect your deployments: 00:12:24.152 - DUP for metadata (-m dup) 00:12:24.152 - enabled no-holes (-O no-holes) 00:12:24.152 - enabled free-space-tree (-R free-space-tree) 00:12:24.152 00:12:24.152 Label: (null) 00:12:24.152 UUID: de9e32c8-c787-4b37-a0e6-72905ad9ce67 00:12:24.152 Node size: 16384 00:12:24.152 Sector size: 4096 (CPU page size: 4096) 00:12:24.152 Filesystem size: 510.00MiB 00:12:24.152 Block group profiles: 00:12:24.152 Data: single 8.00MiB 00:12:24.152 Metadata: DUP 32.00MiB 00:12:24.152 System: DUP 8.00MiB 00:12:24.152 SSD detected: yes 00:12:24.152 Zoned device: no 00:12:24.152 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:24.152 Checksum: crc32c 00:12:24.152 Number of devices: 1 00:12:24.152 Devices: 00:12:24.152 ID SIZE PATH 00:12:24.152 1 510.00MiB /dev/nvme0n1p1 00:12:24.152 00:12:24.152 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:24.152 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.087 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.087 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:25.087 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.087 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:25.087 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:25.087 07:21:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 647170 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.087 00:12:25.087 real 0m1.281s 00:12:25.087 user 0m0.020s 00:12:25.087 sys 0m0.122s 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:25.087 ************************************ 00:12:25.087 END TEST filesystem_in_capsule_btrfs 00:12:25.087 ************************************ 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.087 ************************************ 00:12:25.087 START TEST filesystem_in_capsule_xfs 00:12:25.087 ************************************ 00:12:25.087 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:25.088 07:21:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:25.088 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:25.088 = sectsz=512 attr=2, projid32bit=1 00:12:25.088 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:25.088 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:25.088 data = bsize=4096 blocks=130560, imaxpct=25 00:12:25.088 = sunit=0 swidth=0 blks 00:12:25.088 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:25.088 log =internal log bsize=4096 blocks=16384, version=2 00:12:25.088 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:25.088 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:26.461 Discarding blocks...Done. 
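A quick sanity check on the geometry just printed: 130560 data blocks of 4096 bytes come to 534,773,760 bytes, exactly 510 MiB, the same size btrfs reported for this partition and about 2 MiB short of the 536,870,912-byte (512 MiB) namespace, the difference going to GPT metadata and partition alignment. For example:

# Filesystem size implied by the xfs geometry above: 130560 data blocks of 4096 bytes.
echo $(( 130560 * 4096 ))       # 534773760 bytes
echo $(( 534773760 / 1048576 )) # 510 MiB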
00:12:26.461 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:26.461 07:21:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 647170 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.990 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.991 00:12:28.991 real 0m3.543s 00:12:28.991 user 0m0.027s 00:12:28.991 sys 0m0.071s 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:28.991 ************************************ 00:12:28.991 END TEST filesystem_in_capsule_xfs 00:12:28.991 ************************************ 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 647170 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 647170 ']' 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 647170 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 647170 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 647170' 00:12:28.991 killing process with pid 647170 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 647170 00:12:28.991 07:21:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 647170 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:29.249 00:12:29.249 real 0m17.241s 00:12:29.249 user 1m7.856s 00:12:29.249 sys 0m1.406s 00:12:29.249 07:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.249 ************************************ 00:12:29.249 END TEST nvmf_filesystem_in_capsule 00:12:29.249 ************************************ 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.249 rmmod nvme_tcp 00:12:29.249 rmmod nvme_fabrics 00:12:29.249 rmmod nvme_keyring 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:29.249 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.250 07:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.781 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:31.781 00:12:31.781 real 0m43.509s 00:12:31.781 user 2m22.456s 00:12:31.781 sys 0m6.892s 00:12:31.781 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.781 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:31.781 
************************************ 00:12:31.781 END TEST nvmf_filesystem 00:12:31.781 ************************************ 00:12:31.781 07:21:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:31.781 07:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:31.781 07:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.781 07:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.781 ************************************ 00:12:31.781 START TEST nvmf_target_discovery 00:12:31.782 ************************************ 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:31.782 * Looking for test storage... 00:12:31.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:31.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.782 --rc genhtml_branch_coverage=1 00:12:31.782 --rc genhtml_function_coverage=1 00:12:31.782 --rc genhtml_legend=1 00:12:31.782 --rc geninfo_all_blocks=1 00:12:31.782 --rc geninfo_unexecuted_blocks=1 00:12:31.782 00:12:31.782 ' 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:31.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.782 --rc genhtml_branch_coverage=1 00:12:31.782 --rc genhtml_function_coverage=1 00:12:31.782 --rc genhtml_legend=1 00:12:31.782 --rc geninfo_all_blocks=1 00:12:31.782 --rc geninfo_unexecuted_blocks=1 00:12:31.782 00:12:31.782 ' 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:31.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.782 --rc genhtml_branch_coverage=1 00:12:31.782 --rc genhtml_function_coverage=1 00:12:31.782 --rc genhtml_legend=1 00:12:31.782 --rc geninfo_all_blocks=1 00:12:31.782 --rc geninfo_unexecuted_blocks=1 00:12:31.782 00:12:31.782 ' 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:31.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.782 --rc genhtml_branch_coverage=1 00:12:31.782 --rc genhtml_function_coverage=1 00:12:31.782 --rc genhtml_legend=1 00:12:31.782 --rc geninfo_all_blocks=1 00:12:31.782 --rc geninfo_unexecuted_blocks=1 00:12:31.782 00:12:31.782 ' 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.782 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:31.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:31.783 07:21:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.047 07:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:37.047 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:37.047 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:37.047 Found net devices under 0000:86:00.0: cvl_0_0 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
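The trace above is nvmf/common.sh enumerating the supported NICs: both Intel E810 functions (0000:86:00.0 and 0000:86:00.1, device ID 0x159b, ice driver) match the e810 allow-list and are then mapped to their kernel interfaces by globbing /sys/bus/pci/devices/$pci/net/, which yields cvl_0_0 and cvl_0_1. A minimal standalone sketch of that sysfs lookup, reconstructed from the commands shown here rather than copied from the harness:

# Map each E810 PCI function seen in the log to its net device via sysfs
for pci in 0000:86:00.0 0000:86:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        # guard against an unexpanded glob when a function has no net device
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
done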
00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:37.047 Found net devices under 0000:86:00.1: cvl_0_1 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:37.047 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:37.048 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.048 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.048 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:37.048 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:37.048 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.048 07:22:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.048 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.048 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.048 07:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:37.048 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:37.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:12:37.306 00:12:37.306 --- 10.0.0.2 ping statistics --- 00:12:37.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.306 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:12:37.306 00:12:37.306 --- 10.0.0.1 ping statistics --- 00:12:37.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.306 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.306 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=653672 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 653672 00:12:37.307 07:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 653672 ']' 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.307 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.307 [2024-11-26 07:22:05.285182] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:12:37.307 [2024-11-26 07:22:05.285226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.307 [2024-11-26 07:22:05.350690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.307 [2024-11-26 07:22:05.396097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.307 [2024-11-26 07:22:05.396132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.307 [2024-11-26 07:22:05.396139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.307 [2024-11-26 07:22:05.396145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.307 [2024-11-26 07:22:05.396150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
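By this point the two ports have been split across a network namespace so the SPDK target and the kernel initiator exchange NVMe/TCP over a real link: cvl_0_0 is moved into cvl_0_0_ns_spdk and given the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened with an iptables rule, and both directions are verified with ping before nvmf_tgt is launched inside the namespace with core mask 0xF (hence the four reactors reported just below). Condensed from the ip/iptables commands in the trace as a sketch of the resulting topology, not the harness code itself (the real rule also carries an SPDK_NVMF comment tag):

# Target-side namespace holds one E810 port; the initiator keeps the other
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port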
00:12:37.307 [2024-11-26 07:22:05.397732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.307 [2024-11-26 07:22:05.397847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.307 [2024-11-26 07:22:05.397866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.307 [2024-11-26 07:22:05.397868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 [2024-11-26 07:22:05.534402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 Null1 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 [2024-11-26 07:22:05.579866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 Null2 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:37.566 Null3 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.566 Null4 00:12:37.566 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.824 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:37.824 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.824 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.824 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.824 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.825 07:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.825 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:37.825 00:12:37.825 Discovery Log Number of Records 6, Generation counter 6 00:12:37.825 =====Discovery Log Entry 0====== 00:12:37.825 trtype: tcp 00:12:37.825 adrfam: ipv4 00:12:37.825 subtype: current discovery subsystem 00:12:37.825 treq: not required 00:12:37.825 portid: 0 00:12:37.825 trsvcid: 4420 00:12:37.825 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:37.825 traddr: 10.0.0.2 00:12:37.825 eflags: explicit discovery connections, duplicate discovery information 00:12:37.825 sectype: none 00:12:37.825 =====Discovery Log Entry 1====== 00:12:37.825 trtype: tcp 00:12:37.825 adrfam: ipv4 00:12:37.825 subtype: nvme subsystem 00:12:37.825 treq: not required 00:12:37.825 portid: 0 00:12:37.825 trsvcid: 4420 00:12:37.825 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:37.825 traddr: 10.0.0.2 00:12:37.825 eflags: none 00:12:37.825 sectype: none 00:12:37.825 =====Discovery Log Entry 2====== 00:12:37.825 trtype: tcp 00:12:37.825 adrfam: ipv4 00:12:37.825 subtype: nvme subsystem 00:12:37.825 treq: not required 00:12:37.825 portid: 0 00:12:37.825 trsvcid: 4420 00:12:37.825 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:37.825 traddr: 10.0.0.2 00:12:37.825 eflags: none 00:12:37.825 sectype: none 00:12:37.825 =====Discovery Log Entry 3====== 00:12:37.825 trtype: tcp 00:12:37.825 adrfam: ipv4 00:12:37.825 subtype: nvme subsystem 00:12:37.825 treq: not required 00:12:37.825 portid: 0 00:12:37.825 trsvcid: 4420 00:12:37.825 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:37.825 traddr: 10.0.0.2 00:12:37.825 eflags: none 00:12:37.825 sectype: none 00:12:37.825 =====Discovery Log Entry 4====== 00:12:37.825 trtype: tcp 00:12:37.825 adrfam: ipv4 00:12:37.825 subtype: nvme subsystem 
00:12:37.825 treq: not required 00:12:37.825 portid: 0 00:12:37.825 trsvcid: 4420 00:12:37.825 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:37.825 traddr: 10.0.0.2 00:12:37.825 eflags: none 00:12:37.825 sectype: none 00:12:37.825 =====Discovery Log Entry 5====== 00:12:37.825 trtype: tcp 00:12:37.825 adrfam: ipv4 00:12:37.825 subtype: discovery subsystem referral 00:12:37.825 treq: not required 00:12:37.825 portid: 0 00:12:37.825 trsvcid: 4430 00:12:37.825 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:37.825 traddr: 10.0.0.2 00:12:37.825 eflags: none 00:12:37.825 sectype: none 00:12:38.083 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:38.083 Perform nvmf subsystem discovery via RPC 00:12:38.083 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:38.083 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.083 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.083 [ 00:12:38.083 { 00:12:38.083 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:38.083 "subtype": "Discovery", 00:12:38.083 "listen_addresses": [ 00:12:38.083 { 00:12:38.083 "trtype": "TCP", 00:12:38.083 "adrfam": "IPv4", 00:12:38.083 "traddr": "10.0.0.2", 00:12:38.083 "trsvcid": "4420" 00:12:38.083 } 00:12:38.083 ], 00:12:38.083 "allow_any_host": true, 00:12:38.083 "hosts": [] 00:12:38.083 }, 00:12:38.083 { 00:12:38.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:38.083 "subtype": "NVMe", 00:12:38.083 "listen_addresses": [ 00:12:38.083 { 00:12:38.083 "trtype": "TCP", 00:12:38.083 "adrfam": "IPv4", 00:12:38.083 "traddr": "10.0.0.2", 00:12:38.083 "trsvcid": "4420" 00:12:38.083 } 00:12:38.083 ], 00:12:38.083 "allow_any_host": true, 00:12:38.083 "hosts": [], 00:12:38.083 "serial_number": "SPDK00000000000001", 00:12:38.083 "model_number": "SPDK bdev Controller", 00:12:38.083 "max_namespaces": 32, 00:12:38.083 "min_cntlid": 1, 00:12:38.083 "max_cntlid": 65519, 00:12:38.083 "namespaces": [ 00:12:38.083 { 00:12:38.083 "nsid": 1, 00:12:38.083 "bdev_name": "Null1", 00:12:38.083 "name": "Null1", 00:12:38.083 "nguid": "94C6CBE2E31D48CAA788DDD861A43B03", 00:12:38.083 "uuid": "94c6cbe2-e31d-48ca-a788-ddd861a43b03" 00:12:38.083 } 00:12:38.083 ] 00:12:38.083 }, 00:12:38.083 { 00:12:38.083 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:38.083 "subtype": "NVMe", 00:12:38.083 "listen_addresses": [ 00:12:38.083 { 00:12:38.083 "trtype": "TCP", 00:12:38.083 "adrfam": "IPv4", 00:12:38.083 "traddr": "10.0.0.2", 00:12:38.083 "trsvcid": "4420" 00:12:38.083 } 00:12:38.083 ], 00:12:38.083 "allow_any_host": true, 00:12:38.083 "hosts": [], 00:12:38.083 "serial_number": "SPDK00000000000002", 00:12:38.083 "model_number": "SPDK bdev Controller", 00:12:38.083 "max_namespaces": 32, 00:12:38.083 "min_cntlid": 1, 00:12:38.083 "max_cntlid": 65519, 00:12:38.083 "namespaces": [ 00:12:38.083 { 00:12:38.083 "nsid": 1, 00:12:38.083 "bdev_name": "Null2", 00:12:38.083 "name": "Null2", 00:12:38.083 "nguid": "1EA387BDC47146C38C5729A94C147F52", 00:12:38.083 "uuid": "1ea387bd-c471-46c3-8c57-29a94c147f52" 00:12:38.083 } 00:12:38.083 ] 00:12:38.083 }, 00:12:38.083 { 00:12:38.083 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:38.083 "subtype": "NVMe", 00:12:38.083 "listen_addresses": [ 00:12:38.083 { 00:12:38.083 "trtype": "TCP", 00:12:38.083 "adrfam": "IPv4", 00:12:38.083 "traddr": "10.0.0.2", 
00:12:38.083 "trsvcid": "4420" 00:12:38.083 } 00:12:38.083 ], 00:12:38.083 "allow_any_host": true, 00:12:38.083 "hosts": [], 00:12:38.083 "serial_number": "SPDK00000000000003", 00:12:38.083 "model_number": "SPDK bdev Controller", 00:12:38.083 "max_namespaces": 32, 00:12:38.083 "min_cntlid": 1, 00:12:38.083 "max_cntlid": 65519, 00:12:38.083 "namespaces": [ 00:12:38.083 { 00:12:38.083 "nsid": 1, 00:12:38.083 "bdev_name": "Null3", 00:12:38.083 "name": "Null3", 00:12:38.083 "nguid": "19BB6C72522D45B4993918F85750BEFC", 00:12:38.083 "uuid": "19bb6c72-522d-45b4-9939-18f85750befc" 00:12:38.083 } 00:12:38.084 ] 00:12:38.084 }, 00:12:38.084 { 00:12:38.084 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:38.084 "subtype": "NVMe", 00:12:38.084 "listen_addresses": [ 00:12:38.084 { 00:12:38.084 "trtype": "TCP", 00:12:38.084 "adrfam": "IPv4", 00:12:38.084 "traddr": "10.0.0.2", 00:12:38.084 "trsvcid": "4420" 00:12:38.084 } 00:12:38.084 ], 00:12:38.084 "allow_any_host": true, 00:12:38.084 "hosts": [], 00:12:38.084 "serial_number": "SPDK00000000000004", 00:12:38.084 "model_number": "SPDK bdev Controller", 00:12:38.084 "max_namespaces": 32, 00:12:38.084 "min_cntlid": 1, 00:12:38.084 "max_cntlid": 65519, 00:12:38.084 "namespaces": [ 00:12:38.084 { 00:12:38.084 "nsid": 1, 00:12:38.084 "bdev_name": "Null4", 00:12:38.084 "name": "Null4", 00:12:38.084 "nguid": "6238006EF9B84A87943CB82272C13222", 00:12:38.084 "uuid": "6238006e-f9b8-4a87-943c-b82272c13222" 00:12:38.084 } 00:12:38.084 ] 00:12:38.084 } 00:12:38.084 ] 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:38.084 07:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:38.084 rmmod nvme_tcp 00:12:38.084 rmmod nvme_fabrics 00:12:38.084 rmmod nvme_keyring 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 653672 ']' 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 653672 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 653672 ']' 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 653672 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.084 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653672 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653672' 00:12:38.343 killing process with pid 653672 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 653672 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 653672 00:12:38.343 07:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.343 07:22:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:40.877 00:12:40.877 real 0m8.988s 00:12:40.877 user 0m5.572s 00:12:40.877 sys 0m4.547s 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:40.877 ************************************ 00:12:40.877 END TEST nvmf_target_discovery 00:12:40.877 ************************************ 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.877 ************************************ 00:12:40.877 START TEST nvmf_referrals 00:12:40.877 ************************************ 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:40.877 * Looking for test storage... 
00:12:40.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.877 --rc genhtml_branch_coverage=1 00:12:40.877 --rc genhtml_function_coverage=1 00:12:40.877 --rc genhtml_legend=1 00:12:40.877 --rc geninfo_all_blocks=1 00:12:40.877 --rc geninfo_unexecuted_blocks=1 00:12:40.877 00:12:40.877 ' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.877 --rc genhtml_branch_coverage=1 00:12:40.877 --rc genhtml_function_coverage=1 00:12:40.877 --rc genhtml_legend=1 00:12:40.877 --rc geninfo_all_blocks=1 00:12:40.877 --rc geninfo_unexecuted_blocks=1 00:12:40.877 00:12:40.877 ' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.877 --rc genhtml_branch_coverage=1 00:12:40.877 --rc genhtml_function_coverage=1 00:12:40.877 --rc genhtml_legend=1 00:12:40.877 --rc geninfo_all_blocks=1 00:12:40.877 --rc geninfo_unexecuted_blocks=1 00:12:40.877 00:12:40.877 ' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.877 --rc genhtml_branch_coverage=1 00:12:40.877 --rc genhtml_function_coverage=1 00:12:40.877 --rc genhtml_legend=1 00:12:40.877 --rc geninfo_all_blocks=1 00:12:40.877 --rc geninfo_unexecuted_blocks=1 00:12:40.877 00:12:40.877 ' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:40.877 07:22:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:46.161 07:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:46.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:46.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:46.161 
07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.161 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:46.161 Found net devices under 0000:86:00.0: cvl_0_0 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:46.162 Found net devices under 0000:86:00.1: cvl_0_1 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.162 07:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.162 07:22:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:46.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:12:46.162 00:12:46.162 --- 10.0.0.2 ping statistics --- 00:12:46.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.162 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:12:46.162 00:12:46.162 --- 10.0.0.1 ping statistics --- 00:12:46.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.162 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=657432 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 657432 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 657432 ']' 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
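The trace above is nvmftestinit building the point-to-point TCP test bed used by the rest of this run: one e810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the host-side port (cvl_0_1) gets 10.0.0.1, an iptables rule admits NVMe/TCP traffic, reachability is checked with single pings, and nvmf_tgt is then started inside that namespace. A condensed sketch of the same sequence, assuming the interface names and addresses from this particular run (a different host would enumerate different net devices, and the nvmf_tgt path is relative to the SPDK tree):

    # target side lives in its own namespace; initiator side stays on the host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and confirm both directions answer
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # launch the SPDK target inside the namespace, as the log does next
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &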
00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.162 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.420 [2024-11-26 07:22:14.284819] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:12:46.420 [2024-11-26 07:22:14.284864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.420 [2024-11-26 07:22:14.351077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.420 [2024-11-26 07:22:14.391571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.420 [2024-11-26 07:22:14.391614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.420 [2024-11-26 07:22:14.391622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.420 [2024-11-26 07:22:14.391628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.420 [2024-11-26 07:22:14.391633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.420 [2024-11-26 07:22:14.393068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.420 [2024-11-26 07:22:14.393164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.420 [2024-11-26 07:22:14.393229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.420 [2024-11-26 07:22:14.393231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.420 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.420 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:46.420 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.420 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.420 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.678 [2024-11-26 07:22:14.538230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:46.678 [2024-11-26 07:22:14.551669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.678 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:46.937 07:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:46.937 07:22:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.195 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.453 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:47.453 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:47.453 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:47.453 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:47.453 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:47.453 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.453 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.710 07:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.710 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:47.711 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.711 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.711 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.711 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.968 07:22:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:48.226 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.483 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.484 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
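A condensed sketch of the referral flow the xtrace above walks through, for reference; it assumes a running SPDK nvmf target whose discovery service listens on 10.0.0.2:8009, the in-tree scripts/rpc.py helper invoked from an SPDK checkout, and nvme-cli plus jq on the initiator (the rpc.py path is illustrative, and the --hostnqn/--hostid options used in the log are omitted for brevity):

# Add one referral to the discovery subsystem and one to a specific subsystem NQN.
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

# Target-side view of the referral list ...
./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# ... and the host-side view, read back from the discovery log page.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# Remove both referrals again; get_referrals should then report an empty list.
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # expected: 0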
00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:48.742 rmmod nvme_tcp 00:12:48.742 rmmod nvme_fabrics 00:12:48.742 rmmod nvme_keyring 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 657432 ']' 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 657432 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 657432 ']' 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 657432 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 657432 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 657432' 00:12:48.742 killing process with pid 657432 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 657432 00:12:48.742 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 657432 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.001 07:22:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.906 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.906 00:12:50.906 real 0m10.463s 00:12:50.906 user 0m12.298s 00:12:50.906 sys 0m4.902s 00:12:50.906 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.906 07:22:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:50.906 ************************************ 00:12:50.906 END TEST nvmf_referrals 00:12:50.906 ************************************ 00:12:50.906 07:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:50.906 07:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.906 07:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.906 07:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.165 ************************************ 00:12:51.165 START TEST nvmf_connect_disconnect 00:12:51.165 ************************************ 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:51.165 * Looking for test storage... 00:12:51.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.165 --rc genhtml_branch_coverage=1 00:12:51.165 --rc genhtml_function_coverage=1 00:12:51.165 --rc genhtml_legend=1 00:12:51.165 --rc geninfo_all_blocks=1 00:12:51.165 --rc geninfo_unexecuted_blocks=1 00:12:51.165 00:12:51.165 ' 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.165 --rc genhtml_branch_coverage=1 00:12:51.165 --rc genhtml_function_coverage=1 00:12:51.165 --rc genhtml_legend=1 00:12:51.165 --rc geninfo_all_blocks=1 00:12:51.165 --rc geninfo_unexecuted_blocks=1 00:12:51.165 00:12:51.165 ' 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.165 --rc genhtml_branch_coverage=1 00:12:51.165 --rc genhtml_function_coverage=1 00:12:51.165 --rc genhtml_legend=1 00:12:51.165 --rc geninfo_all_blocks=1 00:12:51.165 --rc geninfo_unexecuted_blocks=1 00:12:51.165 00:12:51.165 ' 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.165 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.165 --rc genhtml_branch_coverage=1 00:12:51.165 --rc genhtml_function_coverage=1 00:12:51.165 --rc genhtml_legend=1 00:12:51.165 --rc geninfo_all_blocks=1 00:12:51.165 --rc geninfo_unexecuted_blocks=1 00:12:51.165 00:12:51.165 ' 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.165 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.166 07:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.166 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:56.430 
07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:56.430 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.430 
07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:56.430 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:56.430 Found net devices under 0000:86:00.0: cvl_0_0 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
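The device probing above boils down to mapping supported PCI functions to kernel net devices through sysfs. A minimal stand-alone sketch of that idea follows; the PCI address is the one reported in this run, while the real logic, including the vendor/device-ID tables for e810/x722/mlx parts, lives in test/nvmf/common.sh:

pci=0000:86:00.0                      # Intel E810 port found above (0x8086 - 0x159b)
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue         # skip functions that expose no net device
    echo "Found net device under $pci: ${dev##*/}"
done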
00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.430 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:56.431 Found net devices under 0000:86:00.1: cvl_0_1 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.431 07:22:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:56.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:12:56.431 00:12:56.431 --- 10.0.0.2 ping statistics --- 00:12:56.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.431 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:12:56.431 00:12:56.431 --- 10.0.0.1 ping statistics --- 00:12:56.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.431 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=661303 00:12:56.431 07:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 661303 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 661303 ']' 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.431 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.431 [2024-11-26 07:22:24.107555] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:12:56.431 [2024-11-26 07:22:24.107602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.431 [2024-11-26 07:22:24.174271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.431 [2024-11-26 07:22:24.217361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.431 [2024-11-26 07:22:24.217402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.431 [2024-11-26 07:22:24.217409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.432 [2024-11-26 07:22:24.217415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.432 [2024-11-26 07:22:24.217423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
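nvmftestinit splits the two E810 ports across network namespaces so a single box can act as both NVMe/TCP target and initiator. A trimmed sketch of that setup and of launching the target inside the namespace, assuming root privileges and the interface names from this run (cvl_0_0 on the target side, cvl_0_1 on the initiator side); the nvmf_tgt path shown is the build-tree default:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

# Run the target inside the namespace: shm id 0, full tracepoint mask, four cores.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &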
00:12:56.432 [2024-11-26 07:22:24.218984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.432 [2024-11-26 07:22:24.219081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.432 [2024-11-26 07:22:24.219165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.432 [2024-11-26 07:22:24.219167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 [2024-11-26 07:22:24.355555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 07:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:56.432 [2024-11-26 07:22:24.414926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:56.432 07:22:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:59.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.823 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:12.823 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:12.823 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:12.823 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:12.823 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:12.823 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:12.824 rmmod nvme_tcp 00:13:12.824 rmmod nvme_fabrics 00:13:12.824 rmmod nvme_keyring 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 661303 ']' 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 661303 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 661303 ']' 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 661303 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
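For reference, here is a hedged sketch of the target configuration and of the host-side loop behind the five "disconnected 1 controller(s)" lines above; the rpc.py path is illustrative, and the loop only approximates what test/nvmf/target/connect_disconnect.sh does with num_iterations=5:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
./scripts/rpc.py bdev_malloc_create 64 512                     # creates Malloc0 (64 MiB, 512 B blocks)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
done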
00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.824 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661303 00:13:13.082 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.082 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.082 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661303' 00:13:13.082 killing process with pid 661303 00:13:13.082 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 661303 00:13:13.082 07:22:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 661303 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.082 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.614 00:13:15.614 real 0m24.175s 00:13:15.614 user 1m8.245s 00:13:15.614 sys 0m5.001s 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.614 ************************************ 00:13:15.614 END TEST nvmf_connect_disconnect 00:13:15.614 ************************************ 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:13:15.614 ************************************ 00:13:15.614 START TEST nvmf_multitarget 00:13:15.614 ************************************ 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:15.614 * Looking for test storage... 00:13:15.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:15.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.614 --rc genhtml_branch_coverage=1 00:13:15.614 --rc genhtml_function_coverage=1 00:13:15.614 --rc genhtml_legend=1 00:13:15.614 --rc geninfo_all_blocks=1 00:13:15.614 --rc geninfo_unexecuted_blocks=1 00:13:15.614 00:13:15.614 ' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:15.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.614 --rc genhtml_branch_coverage=1 00:13:15.614 --rc genhtml_function_coverage=1 00:13:15.614 --rc genhtml_legend=1 00:13:15.614 --rc geninfo_all_blocks=1 00:13:15.614 --rc geninfo_unexecuted_blocks=1 00:13:15.614 00:13:15.614 ' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:15.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.614 --rc genhtml_branch_coverage=1 00:13:15.614 --rc genhtml_function_coverage=1 00:13:15.614 --rc genhtml_legend=1 00:13:15.614 --rc geninfo_all_blocks=1 00:13:15.614 --rc geninfo_unexecuted_blocks=1 00:13:15.614 00:13:15.614 ' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:15.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.614 --rc genhtml_branch_coverage=1 00:13:15.614 --rc genhtml_function_coverage=1 00:13:15.614 --rc genhtml_legend=1 00:13:15.614 --rc geninfo_all_blocks=1 00:13:15.614 --rc geninfo_unexecuted_blocks=1 00:13:15.614 00:13:15.614 ' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.614 07:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.614 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:15.615 07:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.615 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
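What the trace above is doing: nvmf/common.sh builds lists of supported NIC PCI IDs (Intel E810/X722 and Mellanox parts), matches them against the host (both ports on this machine report 0x8086:0x159b, i.e. E810), and then resolves each matching PCI function to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names seen later come from. A minimal stand-alone sketch of that sysfs lookup in bash; the helper name and the hard-coded PCI addresses below are illustrative, not part of the harness:

    # Map a PCI function to the net device(s) the kernel created for it,
    # the same lookup that turns "0000:86:00.0" into "cvl_0_0" in this log.
    list_net_devs_for_pci() {
        local pci=$1 path
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue    # nothing bound to this function
            echo "${path##*/}"
        done
    }

    # Illustrative only -- the two E810 ports reported in the trace:
    for pci in 0000:86:00.0 0000:86:00.1; do
        echo "Found net devices under $pci: $(list_net_devs_for_pci "$pci")"
    done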
00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:20.903 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:20.904 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:20.904 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:20.904 Found net devices under 0000:86:00.0: cvl_0_0 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:20.904 Found net devices under 0000:86:00.1: cvl_0_1 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:20.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:13:20.904 00:13:20.904 --- 10.0.0.2 ping statistics --- 00:13:20.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.904 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:13:20.904 00:13:20.904 --- 10.0.0.1 ping statistics --- 00:13:20.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.904 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:20.904 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=667473 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 667473 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 667473 ']' 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.905 [2024-11-26 07:22:48.621098] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
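At this point nvmfappstart has launched build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 667473) and waitforlisten blocks until the application is up and listening on /var/tmp/spdk.sock. A rough sketch of such a wait loop, assuming it is enough to poll for the UNIX socket; the real waitforlisten in autotest_common.sh performs additional liveness checks, and the ~10 s timeout below is an assumption:

    # Approximation of waitforlisten: wait until the SPDK app with pid $1 has
    # created its RPC socket (the real helper also verifies it accepts commands).
    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            [ -S "$sock" ] && return 0               # socket present -> listening
            sleep 0.1
        done
        return 1
    }

    # e.g. wait_for_rpc_socket 667473 /var/tmp/spdk.sock && echo "nvmf_tgt is up"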
00:13:20.905 [2024-11-26 07:22:48.621144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.905 [2024-11-26 07:22:48.685936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.905 [2024-11-26 07:22:48.728732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.905 [2024-11-26 07:22:48.728769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.905 [2024-11-26 07:22:48.728776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.905 [2024-11-26 07:22:48.728782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.905 [2024-11-26 07:22:48.728787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.905 [2024-11-26 07:22:48.730242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.905 [2024-11-26 07:22:48.730339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.905 [2024-11-26 07:22:48.730424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.905 [2024-11-26 07:22:48.730426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:20.905 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:21.161 "nvmf_tgt_1" 00:13:21.161 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:21.161 "nvmf_tgt_2" 00:13:21.161 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
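The multitarget test drives everything through test/nvmf/target/multitarget_rpc.py: count the existing targets with jq (a freshly started nvmf_tgt has exactly one), create nvmf_tgt_1 and nvmf_tgt_2 with the same -s 32 argument seen above, verify the count is now three, then delete both and verify the count drops back to one. Condensed into a sketch (path shortened to $RPC_PY for readability; the checks mirror the '[' N '!=' N ']' comparisons in the trace):

    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    # A freshly started nvmf_tgt owns exactly one (default) target.
    [ "$($RPC_PY nvmf_get_targets | jq length)" -eq 1 ]

    # Add two named targets, then re-count.
    $RPC_PY nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC_PY nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC_PY nvmf_get_targets | jq length)" -eq 3 ]

    # Remove both and confirm only the default target is left.
    $RPC_PY nvmf_delete_target -n nvmf_tgt_1
    $RPC_PY nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC_PY nvmf_get_targets | jq length)" -eq 1 ]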
00:13:21.161 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:21.418 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:21.418 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:21.418 true 00:13:21.418 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:21.675 true 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.675 rmmod nvme_tcp 00:13:21.675 rmmod nvme_fabrics 00:13:21.675 rmmod nvme_keyring 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 667473 ']' 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 667473 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 667473 ']' 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 667473 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 667473 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.675 07:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 667473' 00:13:21.675 killing process with pid 667473 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 667473 00:13:21.675 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 667473 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.932 07:22:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.464 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:24.464 00:13:24.464 real 0m8.720s 00:13:24.464 user 0m6.910s 00:13:24.464 sys 0m4.209s 00:13:24.464 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.464 07:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:24.464 ************************************ 00:13:24.464 END TEST nvmf_multitarget 00:13:24.464 ************************************ 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.464 ************************************ 00:13:24.464 START TEST nvmf_rpc 00:13:24.464 ************************************ 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:24.464 * Looking for test storage... 
00:13:24.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.464 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.464 --rc genhtml_branch_coverage=1 00:13:24.464 --rc genhtml_function_coverage=1 00:13:24.465 --rc genhtml_legend=1 00:13:24.465 --rc geninfo_all_blocks=1 00:13:24.465 --rc geninfo_unexecuted_blocks=1 00:13:24.465 00:13:24.465 ' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.465 --rc genhtml_branch_coverage=1 00:13:24.465 --rc genhtml_function_coverage=1 00:13:24.465 --rc genhtml_legend=1 00:13:24.465 --rc geninfo_all_blocks=1 00:13:24.465 --rc geninfo_unexecuted_blocks=1 00:13:24.465 00:13:24.465 ' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.465 --rc genhtml_branch_coverage=1 00:13:24.465 --rc genhtml_function_coverage=1 00:13:24.465 --rc genhtml_legend=1 00:13:24.465 --rc geninfo_all_blocks=1 00:13:24.465 --rc geninfo_unexecuted_blocks=1 00:13:24.465 00:13:24.465 ' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.465 --rc genhtml_branch_coverage=1 00:13:24.465 --rc genhtml_function_coverage=1 00:13:24.465 --rc genhtml_legend=1 00:13:24.465 --rc geninfo_all_blocks=1 00:13:24.465 --rc geninfo_unexecuted_blocks=1 00:13:24.465 00:13:24.465 ' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
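The lt 1.15 2 / cmp_versions trace interleaved above is how scripts/common.sh decides whether the installed lcov predates version 2 (and therefore which --rc option spelling to export): both version strings are split on '.', '-' and ':' and the components are compared numerically from left to right. A compact re-implementation of that idea, assuming purely numeric components with no leading zeros (the real helper normalizes components through its decimal function):

    # True (exit 0) when version $1 is strictly older than version $2,
    # mirroring the element-wise walk traced for cmp_versions above.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < len; i++)); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    # As in this run: lcov 1.15 predates 2, so the --rc option form is selected.
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"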
00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.465 07:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.465 07:22:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:29.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:29.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:29.734 Found net devices under 0000:86:00.0: cvl_0_0 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:29.734 Found net devices under 0000:86:00.1: cvl_0_1 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.734 07:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.734 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:13:29.735 00:13:29.735 --- 10.0.0.2 ping statistics --- 00:13:29.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.735 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:13:29.735 00:13:29.735 --- 10.0.0.1 ping statistics --- 00:13:29.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.735 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=671236 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 671236 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 671236 ']' 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.735 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.735 [2024-11-26 07:22:57.828253] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
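The nvmf_tcp_init trace above isolates one port of the E810 pair (cvl_0_0) in a network namespace so the SPDK target and the kernel initiator can reach each other over TCP on the same host, then verifies connectivity with ping before nvmf_tgt is launched inside that namespace. A minimal sketch of the equivalent manual setup, using only the interface names, addresses, namespace name, and port shown in the trace (adjust the cvl_* names for your own NICs):

  # create the namespace and move the target-side port into it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator side stays in the default namespace on 10.0.0.1,
  # target side gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # allow NVMe/TCP traffic to the default port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # sanity-check both directions before starting nvmf_tgt in the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
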
00:13:29.735 [2024-11-26 07:22:57.828304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.080 [2024-11-26 07:22:57.891095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.080 [2024-11-26 07:22:57.935858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.080 [2024-11-26 07:22:57.935892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.080 [2024-11-26 07:22:57.935900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.080 [2024-11-26 07:22:57.935905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.080 [2024-11-26 07:22:57.935911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.080 [2024-11-26 07:22:57.937451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.080 [2024-11-26 07:22:57.937472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.080 [2024-11-26 07:22:57.937567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.080 [2024-11-26 07:22:57.937568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:30.080 "tick_rate": 2300000000, 00:13:30.080 "poll_groups": [ 00:13:30.080 { 00:13:30.080 "name": "nvmf_tgt_poll_group_000", 00:13:30.080 "admin_qpairs": 0, 00:13:30.080 "io_qpairs": 0, 00:13:30.080 "current_admin_qpairs": 0, 00:13:30.080 "current_io_qpairs": 0, 00:13:30.080 "pending_bdev_io": 0, 00:13:30.080 "completed_nvme_io": 0, 00:13:30.080 "transports": [] 00:13:30.080 }, 00:13:30.080 { 00:13:30.080 "name": "nvmf_tgt_poll_group_001", 00:13:30.080 "admin_qpairs": 0, 00:13:30.080 "io_qpairs": 0, 00:13:30.080 "current_admin_qpairs": 0, 00:13:30.080 "current_io_qpairs": 0, 00:13:30.080 "pending_bdev_io": 0, 00:13:30.080 "completed_nvme_io": 0, 00:13:30.080 "transports": [] 00:13:30.080 }, 00:13:30.080 { 00:13:30.080 "name": "nvmf_tgt_poll_group_002", 00:13:30.080 "admin_qpairs": 0, 00:13:30.080 "io_qpairs": 0, 00:13:30.080 
"current_admin_qpairs": 0, 00:13:30.080 "current_io_qpairs": 0, 00:13:30.080 "pending_bdev_io": 0, 00:13:30.080 "completed_nvme_io": 0, 00:13:30.080 "transports": [] 00:13:30.080 }, 00:13:30.080 { 00:13:30.080 "name": "nvmf_tgt_poll_group_003", 00:13:30.080 "admin_qpairs": 0, 00:13:30.080 "io_qpairs": 0, 00:13:30.080 "current_admin_qpairs": 0, 00:13:30.080 "current_io_qpairs": 0, 00:13:30.080 "pending_bdev_io": 0, 00:13:30.080 "completed_nvme_io": 0, 00:13:30.080 "transports": [] 00:13:30.080 } 00:13:30.080 ] 00:13:30.080 }' 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:30.080 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:30.339 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:30.339 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.339 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.339 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.339 [2024-11-26 07:22:58.191315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.339 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.339 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:30.340 "tick_rate": 2300000000, 00:13:30.340 "poll_groups": [ 00:13:30.340 { 00:13:30.340 "name": "nvmf_tgt_poll_group_000", 00:13:30.340 "admin_qpairs": 0, 00:13:30.340 "io_qpairs": 0, 00:13:30.340 "current_admin_qpairs": 0, 00:13:30.340 "current_io_qpairs": 0, 00:13:30.340 "pending_bdev_io": 0, 00:13:30.340 "completed_nvme_io": 0, 00:13:30.340 "transports": [ 00:13:30.340 { 00:13:30.340 "trtype": "TCP" 00:13:30.340 } 00:13:30.340 ] 00:13:30.340 }, 00:13:30.340 { 00:13:30.340 "name": "nvmf_tgt_poll_group_001", 00:13:30.340 "admin_qpairs": 0, 00:13:30.340 "io_qpairs": 0, 00:13:30.340 "current_admin_qpairs": 0, 00:13:30.340 "current_io_qpairs": 0, 00:13:30.340 "pending_bdev_io": 0, 00:13:30.340 "completed_nvme_io": 0, 00:13:30.340 "transports": [ 00:13:30.340 { 00:13:30.340 "trtype": "TCP" 00:13:30.340 } 00:13:30.340 ] 00:13:30.340 }, 00:13:30.340 { 00:13:30.340 "name": "nvmf_tgt_poll_group_002", 00:13:30.340 "admin_qpairs": 0, 00:13:30.340 "io_qpairs": 0, 00:13:30.340 "current_admin_qpairs": 0, 00:13:30.340 "current_io_qpairs": 0, 00:13:30.340 "pending_bdev_io": 0, 00:13:30.340 "completed_nvme_io": 0, 00:13:30.340 "transports": [ 00:13:30.340 { 00:13:30.340 "trtype": "TCP" 
00:13:30.340 } 00:13:30.340 ] 00:13:30.340 }, 00:13:30.340 { 00:13:30.340 "name": "nvmf_tgt_poll_group_003", 00:13:30.340 "admin_qpairs": 0, 00:13:30.340 "io_qpairs": 0, 00:13:30.340 "current_admin_qpairs": 0, 00:13:30.340 "current_io_qpairs": 0, 00:13:30.340 "pending_bdev_io": 0, 00:13:30.340 "completed_nvme_io": 0, 00:13:30.340 "transports": [ 00:13:30.340 { 00:13:30.340 "trtype": "TCP" 00:13:30.340 } 00:13:30.340 ] 00:13:30.340 } 00:13:30.340 ] 00:13:30.340 }' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.340 Malloc1 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.340 [2024-11-26 07:22:58.378119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:30.340 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:30.340 [2024-11-26 07:22:58.406691] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:30.341 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:30.341 could not add new controller: failed to write to nvme-fabrics device 00:13:30.341 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:30.341 07:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.341 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.341 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.341 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:30.341 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.341 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.599 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.599 07:22:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.535 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.535 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:31.535 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.535 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:31.535 07:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.066 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.067 [2024-11-26 07:23:01.690946] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:34.067 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:34.067 could not add new controller: failed to write to nvme-fabrics device 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.067 
07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.067 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.002 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.002 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:35.002 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.002 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:35.002 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.906 07:23:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.266 
07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 [2024-11-26 07:23:05.023339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.266 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.309 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.309 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:38.309 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.309 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:38.309 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.382 [2024-11-26 07:23:08.360166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.382 07:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.809 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.809 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:41.809 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.809 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:41.809 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.711 [2024-11-26 07:23:11.768723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.711 07:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.086 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.086 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:45.086 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.086 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:45.086 07:23:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:46.983 
07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:46.983 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:46.983 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.983 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:46.983 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.983 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:46.983 07:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
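Each pass of the target/rpc.sh@81 loop above runs the same attach/detach cycle: create a subsystem with serial SPDKISFASTANDAWESOME, add the TCP listener on 10.0.0.2:4420, expose Malloc1 as namespace 5, allow any host, connect from the initiator, wait for the serial to show up in lsblk, then tear it all down. The rpc_cmd calls in the trace are autotest's wrapper around scripts/rpc.py; a sketch of one iteration driven directly by rpc.py against the default /var/tmp/spdk.sock socket (the hostnqn UUID is the host ID used throughout this run, and the rpc.py path assumes the SPDK repository root as the working directory):

  NQN=nqn.2016-06.io.spdk:cnode1
  RPC=scripts/rpc.py

  # target side: build the subsystem, listener, and namespace
  $RPC nvmf_create_subsystem $NQN -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 5
  $RPC nvmf_subsystem_allow_any_host $NQN

  # initiator side: connect, wait for the serial to appear, then disconnect
  nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
  nvme disconnect -n $NQN

  # target side: undo this iteration
  $RPC nvmf_subsystem_remove_ns $NQN 5
  $RPC nvmf_delete_subsystem $NQN
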
00:13:46.983 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.983 [2024-11-26 07:23:15.068885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.984 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.984 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:46.984 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.984 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.241 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.241 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.241 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.241 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.241 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.241 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.174 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:48.174 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:48.174 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.174 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:48.174 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 [2024-11-26 07:23:18.476259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.700 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.631 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:51.631 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:51.632 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.632 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:51.632 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:53.531 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:53.531 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:53.531 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.531 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:53.531 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.531 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:53.531 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:53.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:53.790 
07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 [2024-11-26 07:23:21.757826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 [2024-11-26 07:23:21.805934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:53.790 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.791 
07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.791 [2024-11-26 07:23:21.854082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.791 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 [2024-11-26 07:23:21.902245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 [2024-11-26 07:23:21.950425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.050 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:54.050 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.050 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:54.050 "tick_rate": 2300000000, 00:13:54.050 "poll_groups": [ 00:13:54.050 { 00:13:54.050 "name": "nvmf_tgt_poll_group_000", 00:13:54.050 "admin_qpairs": 2, 00:13:54.050 "io_qpairs": 168, 00:13:54.050 "current_admin_qpairs": 0, 00:13:54.050 "current_io_qpairs": 0, 00:13:54.050 "pending_bdev_io": 0, 00:13:54.050 "completed_nvme_io": 311, 00:13:54.050 "transports": [ 00:13:54.050 { 00:13:54.050 "trtype": "TCP" 00:13:54.050 } 00:13:54.050 ] 00:13:54.050 }, 00:13:54.050 { 00:13:54.050 "name": "nvmf_tgt_poll_group_001", 00:13:54.050 "admin_qpairs": 2, 00:13:54.050 "io_qpairs": 168, 00:13:54.050 "current_admin_qpairs": 0, 00:13:54.050 "current_io_qpairs": 0, 00:13:54.050 "pending_bdev_io": 0, 00:13:54.050 "completed_nvme_io": 224, 00:13:54.050 "transports": [ 00:13:54.050 { 00:13:54.050 "trtype": "TCP" 00:13:54.050 } 00:13:54.050 ] 00:13:54.050 }, 00:13:54.050 { 00:13:54.050 "name": "nvmf_tgt_poll_group_002", 00:13:54.050 "admin_qpairs": 1, 00:13:54.050 "io_qpairs": 168, 00:13:54.050 "current_admin_qpairs": 0, 00:13:54.050 "current_io_qpairs": 0, 00:13:54.050 "pending_bdev_io": 0, 00:13:54.050 "completed_nvme_io": 220, 00:13:54.050 "transports": [ 00:13:54.050 { 00:13:54.050 "trtype": "TCP" 00:13:54.050 } 00:13:54.050 ] 00:13:54.050 }, 00:13:54.050 { 00:13:54.050 "name": "nvmf_tgt_poll_group_003", 00:13:54.050 "admin_qpairs": 2, 00:13:54.050 "io_qpairs": 168, 00:13:54.050 "current_admin_qpairs": 0, 00:13:54.050 "current_io_qpairs": 0, 00:13:54.050 "pending_bdev_io": 0, 00:13:54.050 "completed_nvme_io": 267, 00:13:54.050 "transports": [ 00:13:54.050 { 00:13:54.050 "trtype": "TCP" 00:13:54.050 } 00:13:54.050 ] 00:13:54.050 } 00:13:54.050 ] 00:13:54.050 }' 00:13:54.050 07:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.050 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.050 rmmod nvme_tcp 00:13:54.050 rmmod nvme_fabrics 00:13:54.050 rmmod nvme_keyring 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 671236 ']' 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 671236 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 671236 ']' 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 671236 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 671236 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 671236' 
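The jsum checks just above (rpc.sh@112/@113, which summed to 7 admin and 672 I/O qpairs) reduce to a jq filter piped through awk; a minimal stand-alone version, re-querying the target instead of reusing the captured stats variable, could look like this:

# Sum one numeric field across all poll groups reported by nvmf_get_stats.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
jsum() {
    local filter=$1
    "$rpc" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 in this run
jsum '.poll_groups[].io_qpairs'      # 4 x 168 = 672 in this run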
00:13:54.310 killing process with pid 671236 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 671236 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 671236 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.310 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:56.846 00:13:56.846 real 0m32.411s 00:13:56.846 user 1m38.971s 00:13:56.846 sys 0m6.211s 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.846 ************************************ 00:13:56.846 END TEST nvmf_rpc 00:13:56.846 ************************************ 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.846 ************************************ 00:13:56.846 START TEST nvmf_invalid 00:13:56.846 ************************************ 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:56.846 * Looking for test storage... 
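Stepping back to the nvmftestfini teardown that closed out the nvmf_rpc test above (nvmf/common.sh@516-524): with the PID and interface name from this run substituted in, it amounts to roughly the following sequence.

# Approximate nvmftestfini teardown, as traced at the end of the nvmf_rpc test.
sync
modprobe -v -r nvme-tcp           # this run also unloaded nvme_fabrics and nvme_keyring here
modprobe -v -r nvme-fabrics
kill 671236                       # nvmf_tgt PID for this run; the script then waits for it to exit
# drop the SPDK_NVMF iptables rules added during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1          # initiator-side interface; the target namespace is removed separately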
00:13:56.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:56.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.846 --rc genhtml_branch_coverage=1 00:13:56.846 --rc genhtml_function_coverage=1 00:13:56.846 --rc genhtml_legend=1 00:13:56.846 --rc geninfo_all_blocks=1 00:13:56.846 --rc geninfo_unexecuted_blocks=1 00:13:56.846 00:13:56.846 ' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:56.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.846 --rc genhtml_branch_coverage=1 00:13:56.846 --rc genhtml_function_coverage=1 00:13:56.846 --rc genhtml_legend=1 00:13:56.846 --rc geninfo_all_blocks=1 00:13:56.846 --rc geninfo_unexecuted_blocks=1 00:13:56.846 00:13:56.846 ' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:56.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.846 --rc genhtml_branch_coverage=1 00:13:56.846 --rc genhtml_function_coverage=1 00:13:56.846 --rc genhtml_legend=1 00:13:56.846 --rc geninfo_all_blocks=1 00:13:56.846 --rc geninfo_unexecuted_blocks=1 00:13:56.846 00:13:56.846 ' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:56.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.846 --rc genhtml_branch_coverage=1 00:13:56.846 --rc genhtml_function_coverage=1 00:13:56.846 --rc genhtml_legend=1 00:13:56.846 --rc geninfo_all_blocks=1 00:13:56.846 --rc geninfo_unexecuted_blocks=1 00:13:56.846 00:13:56.846 ' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:56.846 07:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.846 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:56.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:56.847 07:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:02.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:02.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.119 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:02.120 Found net devices under 0000:86:00.0: cvl_0_0 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:02.120 Found net devices under 0000:86:00.1: cvl_0_1 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.120 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:14:02.379 00:14:02.379 --- 10.0.0.2 ping statistics --- 00:14:02.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.379 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:14:02.379 00:14:02.379 --- 10.0.0.1 ping statistics --- 00:14:02.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.379 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.379 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=678882 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 678882 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 678882 ']' 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.638 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.638 [2024-11-26 07:23:30.534118] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
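The topology that nvmf_tcp_init assembled above, plus the target launch that follows, comes down to the commands below; interface names, addresses and the nvmf_tgt invocation are the ones from this run, and error handling and the driver checks are omitted.

# Physical-NIC TCP test topology: target NIC moved into a network namespace,
# initiator NIC left in the default namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
# start the target inside the namespace (PID 678882 in this run)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &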
00:14:02.638 [2024-11-26 07:23:30.534163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.638 [2024-11-26 07:23:30.600630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.638 [2024-11-26 07:23:30.641900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.638 [2024-11-26 07:23:30.641941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.638 [2024-11-26 07:23:30.641953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.638 [2024-11-26 07:23:30.641959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.638 [2024-11-26 07:23:30.641965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.638 [2024-11-26 07:23:30.643400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.638 [2024-11-26 07:23:30.643500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.638 [2024-11-26 07:23:30.643573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.638 [2024-11-26 07:23:30.643575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.896 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.896 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:02.896 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.896 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.896 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.896 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.896 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:02.896 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26581 00:14:02.896 [2024-11-26 07:23:30.968962] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:03.154 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:03.154 { 00:14:03.154 "nqn": "nqn.2016-06.io.spdk:cnode26581", 00:14:03.154 "tgt_name": "foobar", 00:14:03.154 "method": "nvmf_create_subsystem", 00:14:03.154 "req_id": 1 00:14:03.154 } 00:14:03.154 Got JSON-RPC error response 00:14:03.154 response: 00:14:03.154 { 00:14:03.154 "code": -32603, 00:14:03.154 "message": "Unable to find target foobar" 00:14:03.154 }' 00:14:03.154 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:03.154 { 00:14:03.154 "nqn": "nqn.2016-06.io.spdk:cnode26581", 00:14:03.154 "tgt_name": "foobar", 00:14:03.154 "method": "nvmf_create_subsystem", 00:14:03.154 "req_id": 1 00:14:03.154 } 00:14:03.154 Got JSON-RPC error response 00:14:03.154 
response: 00:14:03.154 { 00:14:03.154 "code": -32603, 00:14:03.154 "message": "Unable to find target foobar" 00:14:03.154 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:03.154 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:03.154 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4585 00:14:03.154 [2024-11-26 07:23:31.181679] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4585: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:03.154 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:03.154 { 00:14:03.154 "nqn": "nqn.2016-06.io.spdk:cnode4585", 00:14:03.154 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:03.154 "method": "nvmf_create_subsystem", 00:14:03.154 "req_id": 1 00:14:03.154 } 00:14:03.154 Got JSON-RPC error response 00:14:03.154 response: 00:14:03.154 { 00:14:03.154 "code": -32602, 00:14:03.154 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:03.154 }' 00:14:03.154 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:03.154 { 00:14:03.154 "nqn": "nqn.2016-06.io.spdk:cnode4585", 00:14:03.154 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:03.154 "method": "nvmf_create_subsystem", 00:14:03.154 "req_id": 1 00:14:03.154 } 00:14:03.154 Got JSON-RPC error response 00:14:03.154 response: 00:14:03.154 { 00:14:03.154 "code": -32602, 00:14:03.154 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:03.154 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:03.154 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:03.154 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27753 00:14:03.413 [2024-11-26 07:23:31.402364] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27753: invalid model number 'SPDK_Controller' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:03.413 { 00:14:03.413 "nqn": "nqn.2016-06.io.spdk:cnode27753", 00:14:03.413 "model_number": "SPDK_Controller\u001f", 00:14:03.413 "method": "nvmf_create_subsystem", 00:14:03.413 "req_id": 1 00:14:03.413 } 00:14:03.413 Got JSON-RPC error response 00:14:03.413 response: 00:14:03.413 { 00:14:03.413 "code": -32602, 00:14:03.413 "message": "Invalid MN SPDK_Controller\u001f" 00:14:03.413 }' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:03.413 { 00:14:03.413 "nqn": "nqn.2016-06.io.spdk:cnode27753", 00:14:03.413 "model_number": "SPDK_Controller\u001f", 00:14:03.413 "method": "nvmf_create_subsystem", 00:14:03.413 "req_id": 1 00:14:03.413 } 00:14:03.413 Got JSON-RPC error response 00:14:03.413 response: 00:14:03.413 { 00:14:03.413 "code": -32602, 00:14:03.413 "message": "Invalid MN SPDK_Controller\u001f" 00:14:03.413 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:03.413 07:23:31 
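The three rejected nvmf_create_subsystem calls above exercise the JSON-RPC validation paths: an unknown target name (-t foobar) fails with -32603 "Unable to find target", while a serial number or model number carrying a 0x1f control byte fails with -32602 "Invalid SN" / "Invalid MN". The test captures the error text and pattern-matches it. A hedged sketch of the same checks, assuming an SPDK checkout with scripts/rpc.py, a target on the default /var/tmp/spdk.sock, and arbitrary cnode names:

# Sketch: reproduce the negative nvmf_create_subsystem checks from the trace.
RPC=./scripts/rpc.py   # path inside an SPDK checkout (adjust to your tree)

out=$("$RPC" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1 2>&1 || true)
[[ $out == *"Unable to find target"* ]] || echo "unexpected: $out"

# A 0x1f (unit separator) byte embedded in the serial number is refused.
out=$("$RPC" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2 2>&1 || true)
[[ $out == *"Invalid SN"* ]] || echo "unexpected: $out"

# The same byte in the model number is refused as well.
out=$("$RPC" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3 2>&1 || true)
[[ $out == *"Invalid MN"* ]] || echo "unexpected: $out"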
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.413 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:03.413 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.414 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:03.673 
07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 
00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'T/^^hgn@V(gqC|50~9P*' 00:14:03.673 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'T/^^hgn@V(gqC|50~9P*' nqn.2016-06.io.spdk:cnode20254 00:14:03.673 [2024-11-26 07:23:31.755599] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20254: invalid serial number 'T/^^hgn@V(gqC|50~9P*' 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:03.932 { 00:14:03.932 "nqn": "nqn.2016-06.io.spdk:cnode20254", 00:14:03.932 "serial_number": "T/^^hgn@V(gqC|50~\u007f9P*", 00:14:03.932 "method": "nvmf_create_subsystem", 00:14:03.932 "req_id": 1 00:14:03.932 } 00:14:03.932 Got JSON-RPC error response 00:14:03.932 response: 00:14:03.932 { 00:14:03.932 "code": -32602, 00:14:03.932 "message": "Invalid SN T/^^hgn@V(gqC|50~\u007f9P*" 00:14:03.932 }' 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:03.932 { 00:14:03.932 "nqn": "nqn.2016-06.io.spdk:cnode20254", 00:14:03.932 "serial_number": "T/^^hgn@V(gqC|50~\u007f9P*", 00:14:03.932 "method": "nvmf_create_subsystem", 00:14:03.932 "req_id": 1 00:14:03.932 } 00:14:03.932 Got JSON-RPC error response 00:14:03.932 response: 00:14:03.932 { 00:14:03.932 "code": -32602, 00:14:03.932 "message": "Invalid SN T/^^hgn@V(gqC|50~\u007f9P*" 00:14:03.932 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' 
'71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:03.932 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 
00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 
00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:03.933 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 
00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.934 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 
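The long run of printf %x / echo -e pairs above is gen_random_s at work: it draws byte values between 32 and 127, converts each to hex, and appends the decoded character until the requested length is reached; the resulting 21- and 41-character strings are then submitted as serial and model numbers. A condensed sketch of the same technique (function name kept from the trace; selection via bash RANDOM is an assumption, since the randomness source is not visible in this excerpt):

# Sketch of the character-by-character string builder traced above.
gen_random_s() {
    local length=$1 ll string= hex
    local chars=($(seq 32 127))          # printable ASCII plus DEL, as in the trace
    for (( ll = 0; ll < length; ll++ )); do
        printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"   # decimal code -> hex
        string+=$(echo -e "\x$hex")                            # hex -> character
    done
    printf '%s\n' "$string"              # printf avoids echo option parsing if the string starts with '-'
}
gen_random_s 21     # e.g. a 21-character value to use as an invalid serial number
gen_random_s 41     # e.g. a 41-character value to use as an invalid model number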
00:14:04.192 07:23:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '/m!AeQcb7:$AbKU2YbuA+1OE@Cs?Tq-W1FD% /dev/null' 00:14:06.269 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.803 00:14:08.803 real 0m11.819s 00:14:08.803 user 0m18.710s 00:14:08.803 sys 0m5.228s 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:08.803 ************************************ 00:14:08.803 END TEST nvmf_invalid 00:14:08.803 ************************************ 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.803 ************************************ 00:14:08.803 START TEST nvmf_connect_stress 00:14:08.803 ************************************ 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:08.803 * Looking for test storage... 00:14:08.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:08.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.803 --rc genhtml_branch_coverage=1 00:14:08.803 --rc genhtml_function_coverage=1 00:14:08.803 --rc genhtml_legend=1 00:14:08.803 --rc geninfo_all_blocks=1 00:14:08.803 --rc geninfo_unexecuted_blocks=1 00:14:08.803 00:14:08.803 ' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:08.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.803 --rc genhtml_branch_coverage=1 00:14:08.803 --rc genhtml_function_coverage=1 00:14:08.803 --rc genhtml_legend=1 00:14:08.803 --rc geninfo_all_blocks=1 00:14:08.803 --rc geninfo_unexecuted_blocks=1 00:14:08.803 00:14:08.803 ' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:08.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.803 --rc genhtml_branch_coverage=1 00:14:08.803 --rc genhtml_function_coverage=1 00:14:08.803 --rc genhtml_legend=1 00:14:08.803 --rc geninfo_all_blocks=1 00:14:08.803 --rc geninfo_unexecuted_blocks=1 00:14:08.803 00:14:08.803 ' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:08.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.803 --rc genhtml_branch_coverage=1 00:14:08.803 --rc genhtml_function_coverage=1 00:14:08.803 --rc genhtml_legend=1 00:14:08.803 --rc geninfo_all_blocks=1 00:14:08.803 --rc geninfo_unexecuted_blocks=1 00:14:08.803 00:14:08.803 ' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.803 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
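Before the connect_stress preamble continues, scripts/common.sh runs cmp_versions on the detected lcov: both version strings are split on '.', '-' and ':' and compared field by field, and because 1.15 is older than 2 the pre-2.0 spellings --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 are kept in LCOV_OPTS. A rough sketch of that dotted-version comparison, assuming purely numeric fields:

# Sketch: return success (0) if version $1 is strictly older than $2.
version_lt() {
    local IFS=.-:                         # split fields the way the trace does
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                              # versions are equal
}
version_lt 1.15 2 && echo "lcov older than 2.0: keep the --rc lcov_*_coverage flags"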
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:08.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.804 07:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:14.071 07:23:41 
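
The arrays declared just above (pci_devs, net_devs, e810, x722, mlx) drive the NIC discovery that follows: each supported vendor:device pair is expanded from a prebuilt pci_bus_cache map, and every matching PCI address is then resolved to its kernel interface name through sysfs. A minimal sketch of that pattern, with lspci standing in for the harness's pci_bus_cache and only the 0x8086:0x159b (E810) ID from this run used for illustration:

  intel=0x8086
  declare -a e810 net_devs
  # collect PCI addresses for the E810 device ID, as the trace below does via pci_bus_cache
  while read -r addr; do
      e810+=("$addr")
  done < <(lspci -Dn -d "${intel#0x}:159b" | awk '{print $1}')
  for pci in "${e810[@]}"; do
      # each PCI function exposes its interface name under /sys/bus/pci/devices/<addr>/net/
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] && net_devs+=("${dev##*/}")
      done
  done
  printf 'Found net devices: %s\n' "${net_devs[*]}"

On this node the scan finds two E810 ports (0000:86:00.0 and 0000:86:00.1) exposed as cvl_0_0 and cvl_0_1, which become the TCP test interfaces.
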
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:14.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:14.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:14.071 Found net devices under 0000:86:00.0: cvl_0_0 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:14.071 Found net devices under 0000:86:00.1: cvl_0_1 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.071 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:14:14.072 00:14:14.072 --- 10.0.0.2 ping statistics --- 00:14:14.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.072 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:14:14.072 00:14:14.072 --- 10.0.0.1 ping statistics --- 00:14:14.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.072 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=683041 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 683041 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 683041 ']' 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
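
At this point the TCP test topology is in place: one E810 port (cvl_0_0) has been moved into a fresh network namespace to act as the target at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in the firewall (the real helper tags the rule with an SPDK_NVMF comment so it can be stripped out again at teardown), and both directions are ping-verified. A condensed sketch of the same sequence, using the interface and namespace names from this run and omitting the address flushes:

  ns=cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link set cvl_0_0 netns "$ns"                            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$ns" ip link set cvl_0_0 up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                         # root namespace -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1                     # target namespace -> initiator
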
00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:14.072 [2024-11-26 07:23:41.512579] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:14:14.072 [2024-11-26 07:23:41.512623] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.072 [2024-11-26 07:23:41.579045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.072 [2024-11-26 07:23:41.621377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.072 [2024-11-26 07:23:41.621415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.072 [2024-11-26 07:23:41.621423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.072 [2024-11-26 07:23:41.621429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.072 [2024-11-26 07:23:41.621434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.072 [2024-11-26 07:23:41.622812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.072 [2024-11-26 07:23:41.622901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.072 [2024-11-26 07:23:41.622903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 [2024-11-26 07:23:41.758865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.072 07:23:41 
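
The target application itself is started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and the harness then blocks in waitforlisten until the app answers on its UNIX-domain RPC socket. A rough sketch of that launch-and-wait pattern; the real waitforlisten helper is more elaborate, and rpc_get_methods is used here only as a cheap liveness probe:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      # the RPC socket lives on the filesystem, so it is reachable from the root namespace
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done
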
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 [2024-11-26 07:23:41.783120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 NULL1 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=683066 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.072 07:23:41 
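
With the app up, the subsystem under test is provisioned over RPC: a TCP transport with the -o -u 8192 options seen above, subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, at most 10 namespaces), a listener on 10.0.0.2:4420, and a null bdev NULL1 (1000 MiB, 512-byte blocks) to back it. rpc_cmd in the harness effectively forwards its arguments to scripts/rpc.py, so the sequence is roughly equivalent to:

  rpc=./scripts/rpc.py      # defaults to the /var/tmp/spdk.sock socket used above
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512

The connect_stress binary is then pointed at that listener ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', with -t 10) and left running as PERF_PID while the seq 1 20 / cat loop that follows assembles rpc.txt for the management-plane side of the stress.
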
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.072 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.073 07:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.073 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.330 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.330 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:14.330 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.330 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.330 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.588 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.588 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:14.588 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.588 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.588 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.845 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.845 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:14.845 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.845 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.845 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.103 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.103 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:15.103 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.103 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.103 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.700 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.700 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:15.700 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.700 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.700 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.958 07:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.958 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:15.958 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.958 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.958 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.217 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.217 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:16.217 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.217 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.217 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.475 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.475 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:16.475 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.475 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.475 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.733 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.733 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:16.733 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.733 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.733 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.322 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.322 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:17.322 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.322 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.322 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.581 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.581 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:17.581 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.581 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.581 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.839 07:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.839 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:17.839 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.839 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.839 07:23:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.097 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.097 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:18.097 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.097 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.097 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.354 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.354 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:18.354 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.354 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.354 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.919 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.919 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:18.919 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.919 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.919 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.177 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.177 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:19.177 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.177 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.177 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.435 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.435 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:19.435 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.435 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.435 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.693 07:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.693 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:19.693 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.693 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.693 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.258 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.258 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:20.258 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.258 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.258 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.516 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.516 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:20.516 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.516 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.516 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.774 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.774 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:20.774 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.774 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.774 07:23:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.032 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.032 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:21.032 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.032 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.032 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.289 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.289 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:21.289 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.289 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.289 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.853 07:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.853 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:21.853 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.853 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.853 07:23:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.111 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.111 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:22.111 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.111 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.111 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.368 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.368 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:22.368 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.368 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.368 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.626 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.626 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:22.626 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.626 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.626 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.192 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.192 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:23.192 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.192 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.192 07:23:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.449 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.449 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:23.450 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.450 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.450 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.707 07:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.707 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:23.707 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.707 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.707 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.965 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 683066 00:14:23.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (683066) - No such process 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 683066 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:23.965 07:23:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:23.965 rmmod nvme_tcp 00:14:23.965 rmmod nvme_fabrics 00:14:23.965 rmmod nvme_keyring 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 683041 ']' 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 683041 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 683041 ']' 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 683041 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.965 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 683041 00:14:24.224 
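
The long run of kill -0 683066 / rpc_cmd pairs above is the stress loop itself: while the connect_stress process is still alive, the batched RPCs in rpc.txt are replayed against the target; once kill -0 fails ("No such process") the script waits for the stressor, removes rpc.txt, and nvmftestfini tears the environment down (module unload, then killprocess on the nvmf_tgt pid). A compressed sketch of that pattern; feeding rpc.txt to rpc_cmd on stdin is an assumption made to keep the sketch short, and plain kill stands in for the harness's killprocess helper:

  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"                    # replay the batched management RPCs (assumed via stdin)
  done
  wait "$PERF_PID"                         # collect the stressor's exit status
  rm -f "$rpcs"
  trap - SIGINT SIGTERM EXIT
  modprobe -v -r nvme-tcp nvme-fabrics     # matches the rmmod nvme_tcp / nvme_fabrics lines above
  kill "$nvmfpid" && wait "$nvmfpid"       # simplified stand-in for killprocess "$nvmfpid"
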
07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 683041' 00:14:24.224 killing process with pid 683041 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 683041 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 683041 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.224 07:23:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:26.759 00:14:26.759 real 0m17.908s 00:14:26.759 user 0m38.969s 00:14:26.759 sys 0m7.679s 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.759 ************************************ 00:14:26.759 END TEST nvmf_connect_stress 00:14:26.759 ************************************ 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:26.759 ************************************ 00:14:26.759 START TEST nvmf_fused_ordering 00:14:26.759 ************************************ 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:26.759 * Looking for test storage... 00:14:26.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:26.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.759 --rc genhtml_branch_coverage=1 00:14:26.759 --rc genhtml_function_coverage=1 00:14:26.759 --rc genhtml_legend=1 00:14:26.759 --rc geninfo_all_blocks=1 00:14:26.759 --rc geninfo_unexecuted_blocks=1 00:14:26.759 00:14:26.759 ' 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:26.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.759 --rc genhtml_branch_coverage=1 00:14:26.759 --rc genhtml_function_coverage=1 00:14:26.759 --rc genhtml_legend=1 00:14:26.759 --rc geninfo_all_blocks=1 00:14:26.759 --rc geninfo_unexecuted_blocks=1 00:14:26.759 00:14:26.759 ' 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:26.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.759 --rc genhtml_branch_coverage=1 00:14:26.759 --rc genhtml_function_coverage=1 00:14:26.759 --rc genhtml_legend=1 00:14:26.759 --rc geninfo_all_blocks=1 00:14:26.759 --rc geninfo_unexecuted_blocks=1 00:14:26.759 00:14:26.759 ' 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:26.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.759 --rc genhtml_branch_coverage=1 00:14:26.759 --rc genhtml_function_coverage=1 00:14:26.759 --rc genhtml_legend=1 00:14:26.759 --rc geninfo_all_blocks=1 00:14:26.759 --rc geninfo_unexecuted_blocks=1 00:14:26.759 00:14:26.759 ' 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.759 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:26.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:26.760 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:32.026 07:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:32.026 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:32.026 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:32.026 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:32.027 Found net devices under 0000:86:00.0: cvl_0_0 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:32.027 Found net devices under 0000:86:00.1: cvl_0_1 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.027 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:32.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:14:32.286 00:14:32.286 --- 10.0.0.2 ping statistics --- 00:14:32.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.286 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:14:32.286 00:14:32.286 --- 10.0.0.1 ping statistics --- 00:14:32.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.286 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=688249 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 688249 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 688249 ']' 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.286 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.287 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:32.287 [2024-11-26 07:24:00.327104] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:14:32.287 [2024-11-26 07:24:00.327150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.545 [2024-11-26 07:24:00.396747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.545 [2024-11-26 07:24:00.438538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.545 [2024-11-26 07:24:00.438575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.545 [2024-11-26 07:24:00.438583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.545 [2024-11-26 07:24:00.438589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.545 [2024-11-26 07:24:00.438595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.545 [2024-11-26 07:24:00.439107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.545 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.546 [2024-11-26 07:24:00.562705] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.546 [2024-11-26 07:24:00.578891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.546 NULL1 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.546 07:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:32.546 [2024-11-26 07:24:00.633521] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:14:32.546 [2024-11-26 07:24:00.633566] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688422 ] 00:14:33.112 Attached to nqn.2016-06.io.spdk:cnode1 00:14:33.112 Namespace ID: 1 size: 1GB 00:14:33.112 fused_ordering(0) 00:14:33.112 fused_ordering(1) 00:14:33.112 fused_ordering(2) 00:14:33.112 fused_ordering(3) 00:14:33.112 fused_ordering(4) 00:14:33.112 fused_ordering(5) 00:14:33.112 fused_ordering(6) 00:14:33.112 fused_ordering(7) 00:14:33.112 fused_ordering(8) 00:14:33.112 fused_ordering(9) 00:14:33.112 fused_ordering(10) 00:14:33.112 fused_ordering(11) 00:14:33.112 fused_ordering(12) 00:14:33.112 fused_ordering(13) 00:14:33.112 fused_ordering(14) 00:14:33.112 fused_ordering(15) 00:14:33.112 fused_ordering(16) 00:14:33.112 fused_ordering(17) 00:14:33.112 fused_ordering(18) 00:14:33.112 fused_ordering(19) 00:14:33.112 fused_ordering(20) 00:14:33.112 fused_ordering(21) 00:14:33.112 fused_ordering(22) 00:14:33.112 fused_ordering(23) 00:14:33.112 fused_ordering(24) 00:14:33.112 fused_ordering(25) 00:14:33.112 fused_ordering(26) 00:14:33.112 fused_ordering(27) 00:14:33.112 fused_ordering(28) 00:14:33.112 fused_ordering(29) 00:14:33.112 fused_ordering(30) 00:14:33.112 fused_ordering(31) 00:14:33.112 fused_ordering(32) 00:14:33.112 fused_ordering(33) 00:14:33.112 fused_ordering(34) 00:14:33.112 fused_ordering(35) 00:14:33.112 fused_ordering(36) 00:14:33.112 fused_ordering(37) 00:14:33.112 fused_ordering(38) 00:14:33.112 fused_ordering(39) 00:14:33.112 fused_ordering(40) 00:14:33.112 fused_ordering(41) 00:14:33.112 fused_ordering(42) 00:14:33.112 fused_ordering(43) 00:14:33.112 fused_ordering(44) 00:14:33.112 fused_ordering(45) 00:14:33.112 fused_ordering(46) 00:14:33.112 fused_ordering(47) 00:14:33.112 fused_ordering(48) 00:14:33.112 fused_ordering(49) 00:14:33.112 fused_ordering(50) 00:14:33.112 fused_ordering(51) 00:14:33.112 fused_ordering(52) 00:14:33.112 fused_ordering(53) 00:14:33.112 fused_ordering(54) 00:14:33.112 fused_ordering(55) 00:14:33.112 fused_ordering(56) 00:14:33.112 fused_ordering(57) 00:14:33.112 fused_ordering(58) 00:14:33.112 fused_ordering(59) 00:14:33.112 fused_ordering(60) 00:14:33.112 fused_ordering(61) 00:14:33.112 fused_ordering(62) 00:14:33.112 fused_ordering(63) 00:14:33.112 fused_ordering(64) 00:14:33.112 fused_ordering(65) 00:14:33.112 fused_ordering(66) 00:14:33.112 fused_ordering(67) 00:14:33.112 fused_ordering(68) 00:14:33.112 fused_ordering(69) 00:14:33.112 fused_ordering(70) 00:14:33.112 fused_ordering(71) 00:14:33.112 fused_ordering(72) 00:14:33.112 fused_ordering(73) 00:14:33.112 fused_ordering(74) 00:14:33.112 fused_ordering(75) 00:14:33.112 fused_ordering(76) 00:14:33.112 fused_ordering(77) 00:14:33.112 fused_ordering(78) 00:14:33.112 fused_ordering(79) 00:14:33.112 fused_ordering(80) 00:14:33.112 fused_ordering(81) 00:14:33.112 fused_ordering(82) 00:14:33.112 fused_ordering(83) 00:14:33.112 fused_ordering(84) 00:14:33.112 fused_ordering(85) 00:14:33.112 fused_ordering(86) 00:14:33.112 fused_ordering(87) 00:14:33.112 fused_ordering(88) 00:14:33.112 fused_ordering(89) 00:14:33.112 fused_ordering(90) 00:14:33.112 fused_ordering(91) 00:14:33.112 fused_ordering(92) 00:14:33.112 fused_ordering(93) 00:14:33.112 fused_ordering(94) 00:14:33.112 fused_ordering(95) 00:14:33.112 fused_ordering(96) 00:14:33.112 fused_ordering(97) 00:14:33.112 fused_ordering(98) 
00:14:33.112 fused_ordering(99) 00:14:33.112 fused_ordering(100) 00:14:33.112 fused_ordering(101) 00:14:33.112 fused_ordering(102) 00:14:33.112 fused_ordering(103) 00:14:33.112 fused_ordering(104) 00:14:33.112 fused_ordering(105) 00:14:33.112 fused_ordering(106) 00:14:33.112 fused_ordering(107) 00:14:33.112 fused_ordering(108) 00:14:33.112 fused_ordering(109) 00:14:33.112 fused_ordering(110) 00:14:33.112 fused_ordering(111) 00:14:33.112 fused_ordering(112) 00:14:33.112 fused_ordering(113) 00:14:33.112 fused_ordering(114) 00:14:33.112 fused_ordering(115) 00:14:33.112 fused_ordering(116) 00:14:33.112 fused_ordering(117) 00:14:33.112 fused_ordering(118) 00:14:33.112 fused_ordering(119) 00:14:33.112 fused_ordering(120) 00:14:33.112 fused_ordering(121) 00:14:33.112 fused_ordering(122) 00:14:33.112 fused_ordering(123) 00:14:33.112 fused_ordering(124) 00:14:33.112 fused_ordering(125) 00:14:33.112 fused_ordering(126) 00:14:33.112 fused_ordering(127) 00:14:33.112 fused_ordering(128) 00:14:33.112 fused_ordering(129) 00:14:33.112 fused_ordering(130) 00:14:33.112 fused_ordering(131) 00:14:33.112 fused_ordering(132) 00:14:33.112 fused_ordering(133) 00:14:33.112 fused_ordering(134) 00:14:33.112 fused_ordering(135) 00:14:33.112 fused_ordering(136) 00:14:33.112 fused_ordering(137) 00:14:33.112 fused_ordering(138) 00:14:33.112 fused_ordering(139) 00:14:33.112 fused_ordering(140) 00:14:33.112 fused_ordering(141) 00:14:33.112 fused_ordering(142) 00:14:33.112 fused_ordering(143) 00:14:33.112 fused_ordering(144) 00:14:33.112 fused_ordering(145) 00:14:33.112 fused_ordering(146) 00:14:33.112 fused_ordering(147) 00:14:33.112 fused_ordering(148) 00:14:33.112 fused_ordering(149) 00:14:33.112 fused_ordering(150) 00:14:33.112 fused_ordering(151) 00:14:33.112 fused_ordering(152) 00:14:33.112 fused_ordering(153) 00:14:33.112 fused_ordering(154) 00:14:33.112 fused_ordering(155) 00:14:33.112 fused_ordering(156) 00:14:33.112 fused_ordering(157) 00:14:33.112 fused_ordering(158) 00:14:33.112 fused_ordering(159) 00:14:33.112 fused_ordering(160) 00:14:33.112 fused_ordering(161) 00:14:33.112 fused_ordering(162) 00:14:33.112 fused_ordering(163) 00:14:33.112 fused_ordering(164) 00:14:33.112 fused_ordering(165) 00:14:33.112 fused_ordering(166) 00:14:33.112 fused_ordering(167) 00:14:33.112 fused_ordering(168) 00:14:33.112 fused_ordering(169) 00:14:33.112 fused_ordering(170) 00:14:33.112 fused_ordering(171) 00:14:33.112 fused_ordering(172) 00:14:33.112 fused_ordering(173) 00:14:33.113 fused_ordering(174) 00:14:33.113 fused_ordering(175) 00:14:33.113 fused_ordering(176) 00:14:33.113 fused_ordering(177) 00:14:33.113 fused_ordering(178) 00:14:33.113 fused_ordering(179) 00:14:33.113 fused_ordering(180) 00:14:33.113 fused_ordering(181) 00:14:33.113 fused_ordering(182) 00:14:33.113 fused_ordering(183) 00:14:33.113 fused_ordering(184) 00:14:33.113 fused_ordering(185) 00:14:33.113 fused_ordering(186) 00:14:33.113 fused_ordering(187) 00:14:33.113 fused_ordering(188) 00:14:33.113 fused_ordering(189) 00:14:33.113 fused_ordering(190) 00:14:33.113 fused_ordering(191) 00:14:33.113 fused_ordering(192) 00:14:33.113 fused_ordering(193) 00:14:33.113 fused_ordering(194) 00:14:33.113 fused_ordering(195) 00:14:33.113 fused_ordering(196) 00:14:33.113 fused_ordering(197) 00:14:33.113 fused_ordering(198) 00:14:33.113 fused_ordering(199) 00:14:33.113 fused_ordering(200) 00:14:33.113 fused_ordering(201) 00:14:33.113 fused_ordering(202) 00:14:33.113 fused_ordering(203) 00:14:33.113 fused_ordering(204) 00:14:33.113 fused_ordering(205) 00:14:33.372 
fused_ordering(206) 00:14:33.372 fused_ordering(207) 00:14:33.372 fused_ordering(208) 00:14:33.372 fused_ordering(209) 00:14:33.372 fused_ordering(210) 00:14:33.372 fused_ordering(211) 00:14:33.372 fused_ordering(212) 00:14:33.372 fused_ordering(213) 00:14:33.372 fused_ordering(214) 00:14:33.372 fused_ordering(215) 00:14:33.372 fused_ordering(216) 00:14:33.372 fused_ordering(217) 00:14:33.372 fused_ordering(218) 00:14:33.372 fused_ordering(219) 00:14:33.372 fused_ordering(220) 00:14:33.372 fused_ordering(221) 00:14:33.372 fused_ordering(222) 00:14:33.372 fused_ordering(223) 00:14:33.372 fused_ordering(224) 00:14:33.372 fused_ordering(225) 00:14:33.372 fused_ordering(226) 00:14:33.372 fused_ordering(227) 00:14:33.372 fused_ordering(228) 00:14:33.372 fused_ordering(229) 00:14:33.372 fused_ordering(230) 00:14:33.372 fused_ordering(231) 00:14:33.372 fused_ordering(232) 00:14:33.372 fused_ordering(233) 00:14:33.372 fused_ordering(234) 00:14:33.372 fused_ordering(235) 00:14:33.372 fused_ordering(236) 00:14:33.372 fused_ordering(237) 00:14:33.372 fused_ordering(238) 00:14:33.372 fused_ordering(239) 00:14:33.372 fused_ordering(240) 00:14:33.372 fused_ordering(241) 00:14:33.372 fused_ordering(242) 00:14:33.372 fused_ordering(243) 00:14:33.372 fused_ordering(244) 00:14:33.372 fused_ordering(245) 00:14:33.372 fused_ordering(246) 00:14:33.372 fused_ordering(247) 00:14:33.372 fused_ordering(248) 00:14:33.372 fused_ordering(249) 00:14:33.372 fused_ordering(250) 00:14:33.372 fused_ordering(251) 00:14:33.372 fused_ordering(252) 00:14:33.372 fused_ordering(253) 00:14:33.372 fused_ordering(254) 00:14:33.372 fused_ordering(255) 00:14:33.372 fused_ordering(256) 00:14:33.372 fused_ordering(257) 00:14:33.372 fused_ordering(258) 00:14:33.372 fused_ordering(259) 00:14:33.372 fused_ordering(260) 00:14:33.372 fused_ordering(261) 00:14:33.372 fused_ordering(262) 00:14:33.372 fused_ordering(263) 00:14:33.372 fused_ordering(264) 00:14:33.372 fused_ordering(265) 00:14:33.372 fused_ordering(266) 00:14:33.372 fused_ordering(267) 00:14:33.372 fused_ordering(268) 00:14:33.372 fused_ordering(269) 00:14:33.372 fused_ordering(270) 00:14:33.372 fused_ordering(271) 00:14:33.372 fused_ordering(272) 00:14:33.372 fused_ordering(273) 00:14:33.372 fused_ordering(274) 00:14:33.372 fused_ordering(275) 00:14:33.372 fused_ordering(276) 00:14:33.372 fused_ordering(277) 00:14:33.372 fused_ordering(278) 00:14:33.372 fused_ordering(279) 00:14:33.372 fused_ordering(280) 00:14:33.372 fused_ordering(281) 00:14:33.372 fused_ordering(282) 00:14:33.372 fused_ordering(283) 00:14:33.372 fused_ordering(284) 00:14:33.372 fused_ordering(285) 00:14:33.372 fused_ordering(286) 00:14:33.372 fused_ordering(287) 00:14:33.372 fused_ordering(288) 00:14:33.372 fused_ordering(289) 00:14:33.372 fused_ordering(290) 00:14:33.372 fused_ordering(291) 00:14:33.372 fused_ordering(292) 00:14:33.372 fused_ordering(293) 00:14:33.372 fused_ordering(294) 00:14:33.372 fused_ordering(295) 00:14:33.372 fused_ordering(296) 00:14:33.372 fused_ordering(297) 00:14:33.372 fused_ordering(298) 00:14:33.372 fused_ordering(299) 00:14:33.372 fused_ordering(300) 00:14:33.372 fused_ordering(301) 00:14:33.372 fused_ordering(302) 00:14:33.372 fused_ordering(303) 00:14:33.372 fused_ordering(304) 00:14:33.372 fused_ordering(305) 00:14:33.372 fused_ordering(306) 00:14:33.372 fused_ordering(307) 00:14:33.372 fused_ordering(308) 00:14:33.372 fused_ordering(309) 00:14:33.372 fused_ordering(310) 00:14:33.372 fused_ordering(311) 00:14:33.372 fused_ordering(312) 00:14:33.372 fused_ordering(313) 
00:14:33.372 fused_ordering(314) 00:14:33.372 fused_ordering(315) 00:14:33.372 fused_ordering(316) 00:14:33.372 fused_ordering(317) 00:14:33.372 fused_ordering(318) 00:14:33.372 fused_ordering(319) 00:14:33.372 fused_ordering(320) 00:14:33.372 fused_ordering(321) 00:14:33.372 fused_ordering(322) 00:14:33.372 fused_ordering(323) 00:14:33.373 fused_ordering(324) 00:14:33.373 fused_ordering(325) 00:14:33.373 fused_ordering(326) 00:14:33.373 fused_ordering(327) 00:14:33.373 fused_ordering(328) 00:14:33.373 fused_ordering(329) 00:14:33.373 fused_ordering(330) 00:14:33.373 fused_ordering(331) 00:14:33.373 fused_ordering(332) 00:14:33.373 fused_ordering(333) 00:14:33.373 fused_ordering(334) 00:14:33.373 fused_ordering(335) 00:14:33.373 fused_ordering(336) 00:14:33.373 fused_ordering(337) 00:14:33.373 fused_ordering(338) 00:14:33.373 fused_ordering(339) 00:14:33.373 fused_ordering(340) 00:14:33.373 fused_ordering(341) 00:14:33.373 fused_ordering(342) 00:14:33.373 fused_ordering(343) 00:14:33.373 fused_ordering(344) 00:14:33.373 fused_ordering(345) 00:14:33.373 fused_ordering(346) 00:14:33.373 fused_ordering(347) 00:14:33.373 fused_ordering(348) 00:14:33.373 fused_ordering(349) 00:14:33.373 fused_ordering(350) 00:14:33.373 fused_ordering(351) 00:14:33.373 fused_ordering(352) 00:14:33.373 fused_ordering(353) 00:14:33.373 fused_ordering(354) 00:14:33.373 fused_ordering(355) 00:14:33.373 fused_ordering(356) 00:14:33.373 fused_ordering(357) 00:14:33.373 fused_ordering(358) 00:14:33.373 fused_ordering(359) 00:14:33.373 fused_ordering(360) 00:14:33.373 fused_ordering(361) 00:14:33.373 fused_ordering(362) 00:14:33.373 fused_ordering(363) 00:14:33.373 fused_ordering(364) 00:14:33.373 fused_ordering(365) 00:14:33.373 fused_ordering(366) 00:14:33.373 fused_ordering(367) 00:14:33.373 fused_ordering(368) 00:14:33.373 fused_ordering(369) 00:14:33.373 fused_ordering(370) 00:14:33.373 fused_ordering(371) 00:14:33.373 fused_ordering(372) 00:14:33.373 fused_ordering(373) 00:14:33.373 fused_ordering(374) 00:14:33.373 fused_ordering(375) 00:14:33.373 fused_ordering(376) 00:14:33.373 fused_ordering(377) 00:14:33.373 fused_ordering(378) 00:14:33.373 fused_ordering(379) 00:14:33.373 fused_ordering(380) 00:14:33.373 fused_ordering(381) 00:14:33.373 fused_ordering(382) 00:14:33.373 fused_ordering(383) 00:14:33.373 fused_ordering(384) 00:14:33.373 fused_ordering(385) 00:14:33.373 fused_ordering(386) 00:14:33.373 fused_ordering(387) 00:14:33.373 fused_ordering(388) 00:14:33.373 fused_ordering(389) 00:14:33.373 fused_ordering(390) 00:14:33.373 fused_ordering(391) 00:14:33.373 fused_ordering(392) 00:14:33.373 fused_ordering(393) 00:14:33.373 fused_ordering(394) 00:14:33.373 fused_ordering(395) 00:14:33.373 fused_ordering(396) 00:14:33.373 fused_ordering(397) 00:14:33.373 fused_ordering(398) 00:14:33.373 fused_ordering(399) 00:14:33.373 fused_ordering(400) 00:14:33.373 fused_ordering(401) 00:14:33.373 fused_ordering(402) 00:14:33.373 fused_ordering(403) 00:14:33.373 fused_ordering(404) 00:14:33.373 fused_ordering(405) 00:14:33.373 fused_ordering(406) 00:14:33.373 fused_ordering(407) 00:14:33.373 fused_ordering(408) 00:14:33.373 fused_ordering(409) 00:14:33.373 fused_ordering(410) 00:14:33.631 fused_ordering(411) 00:14:33.631 fused_ordering(412) 00:14:33.631 fused_ordering(413) 00:14:33.631 fused_ordering(414) 00:14:33.631 fused_ordering(415) 00:14:33.631 fused_ordering(416) 00:14:33.631 fused_ordering(417) 00:14:33.631 fused_ordering(418) 00:14:33.631 fused_ordering(419) 00:14:33.631 fused_ordering(420) 00:14:33.631 
fused_ordering(421) 00:14:33.631 fused_ordering(422) 00:14:33.631 fused_ordering(423) 00:14:33.631 fused_ordering(424) 00:14:33.631 fused_ordering(425) 00:14:33.631 fused_ordering(426) 00:14:33.631 fused_ordering(427) 00:14:33.631 fused_ordering(428) 00:14:33.631 fused_ordering(429) 00:14:33.631 fused_ordering(430) 00:14:33.631 fused_ordering(431) 00:14:33.631 fused_ordering(432) 00:14:33.631 fused_ordering(433) 00:14:33.631 fused_ordering(434) 00:14:33.631 fused_ordering(435) 00:14:33.631 fused_ordering(436) 00:14:33.631 fused_ordering(437) 00:14:33.631 fused_ordering(438) 00:14:33.631 fused_ordering(439) 00:14:33.631 fused_ordering(440) 00:14:33.631 fused_ordering(441) 00:14:33.631 fused_ordering(442) 00:14:33.631 fused_ordering(443) 00:14:33.631 fused_ordering(444) 00:14:33.631 fused_ordering(445) 00:14:33.631 fused_ordering(446) 00:14:33.631 fused_ordering(447) 00:14:33.631 fused_ordering(448) 00:14:33.631 fused_ordering(449) 00:14:33.631 fused_ordering(450) 00:14:33.631 fused_ordering(451) 00:14:33.631 fused_ordering(452) 00:14:33.631 fused_ordering(453) 00:14:33.631 fused_ordering(454) 00:14:33.631 fused_ordering(455) 00:14:33.631 fused_ordering(456) 00:14:33.631 fused_ordering(457) 00:14:33.631 fused_ordering(458) 00:14:33.631 fused_ordering(459) 00:14:33.631 fused_ordering(460) 00:14:33.631 fused_ordering(461) 00:14:33.631 fused_ordering(462) 00:14:33.631 fused_ordering(463) 00:14:33.631 fused_ordering(464) 00:14:33.631 fused_ordering(465) 00:14:33.631 fused_ordering(466) 00:14:33.631 fused_ordering(467) 00:14:33.631 fused_ordering(468) 00:14:33.631 fused_ordering(469) 00:14:33.631 fused_ordering(470) 00:14:33.631 fused_ordering(471) 00:14:33.631 fused_ordering(472) 00:14:33.631 fused_ordering(473) 00:14:33.631 fused_ordering(474) 00:14:33.631 fused_ordering(475) 00:14:33.631 fused_ordering(476) 00:14:33.631 fused_ordering(477) 00:14:33.631 fused_ordering(478) 00:14:33.631 fused_ordering(479) 00:14:33.631 fused_ordering(480) 00:14:33.631 fused_ordering(481) 00:14:33.631 fused_ordering(482) 00:14:33.631 fused_ordering(483) 00:14:33.631 fused_ordering(484) 00:14:33.631 fused_ordering(485) 00:14:33.631 fused_ordering(486) 00:14:33.631 fused_ordering(487) 00:14:33.631 fused_ordering(488) 00:14:33.631 fused_ordering(489) 00:14:33.631 fused_ordering(490) 00:14:33.631 fused_ordering(491) 00:14:33.631 fused_ordering(492) 00:14:33.631 fused_ordering(493) 00:14:33.631 fused_ordering(494) 00:14:33.632 fused_ordering(495) 00:14:33.632 fused_ordering(496) 00:14:33.632 fused_ordering(497) 00:14:33.632 fused_ordering(498) 00:14:33.632 fused_ordering(499) 00:14:33.632 fused_ordering(500) 00:14:33.632 fused_ordering(501) 00:14:33.632 fused_ordering(502) 00:14:33.632 fused_ordering(503) 00:14:33.632 fused_ordering(504) 00:14:33.632 fused_ordering(505) 00:14:33.632 fused_ordering(506) 00:14:33.632 fused_ordering(507) 00:14:33.632 fused_ordering(508) 00:14:33.632 fused_ordering(509) 00:14:33.632 fused_ordering(510) 00:14:33.632 fused_ordering(511) 00:14:33.632 fused_ordering(512) 00:14:33.632 fused_ordering(513) 00:14:33.632 fused_ordering(514) 00:14:33.632 fused_ordering(515) 00:14:33.632 fused_ordering(516) 00:14:33.632 fused_ordering(517) 00:14:33.632 fused_ordering(518) 00:14:33.632 fused_ordering(519) 00:14:33.632 fused_ordering(520) 00:14:33.632 fused_ordering(521) 00:14:33.632 fused_ordering(522) 00:14:33.632 fused_ordering(523) 00:14:33.632 fused_ordering(524) 00:14:33.632 fused_ordering(525) 00:14:33.632 fused_ordering(526) 00:14:33.632 fused_ordering(527) 00:14:33.632 fused_ordering(528) 
00:14:33.632 fused_ordering(529) 00:14:33.632 fused_ordering(530) 00:14:33.632 fused_ordering(531) 00:14:33.632 fused_ordering(532) 00:14:33.632 fused_ordering(533) 00:14:33.632 fused_ordering(534) 00:14:33.632 fused_ordering(535) 00:14:33.632 fused_ordering(536) 00:14:33.632 fused_ordering(537) 00:14:33.632 fused_ordering(538) 00:14:33.632 fused_ordering(539) 00:14:33.632 fused_ordering(540) 00:14:33.632 fused_ordering(541) 00:14:33.632 fused_ordering(542) 00:14:33.632 fused_ordering(543) 00:14:33.632 fused_ordering(544) 00:14:33.632 fused_ordering(545) 00:14:33.632 fused_ordering(546) 00:14:33.632 fused_ordering(547) 00:14:33.632 fused_ordering(548) 00:14:33.632 fused_ordering(549) 00:14:33.632 fused_ordering(550) 00:14:33.632 fused_ordering(551) 00:14:33.632 fused_ordering(552) 00:14:33.632 fused_ordering(553) 00:14:33.632 fused_ordering(554) 00:14:33.632 fused_ordering(555) 00:14:33.632 fused_ordering(556) 00:14:33.632 fused_ordering(557) 00:14:33.632 fused_ordering(558) 00:14:33.632 fused_ordering(559) 00:14:33.632 fused_ordering(560) 00:14:33.632 fused_ordering(561) 00:14:33.632 fused_ordering(562) 00:14:33.632 fused_ordering(563) 00:14:33.632 fused_ordering(564) 00:14:33.632 fused_ordering(565) 00:14:33.632 fused_ordering(566) 00:14:33.632 fused_ordering(567) 00:14:33.632 fused_ordering(568) 00:14:33.632 fused_ordering(569) 00:14:33.632 fused_ordering(570) 00:14:33.632 fused_ordering(571) 00:14:33.632 fused_ordering(572) 00:14:33.632 fused_ordering(573) 00:14:33.632 fused_ordering(574) 00:14:33.632 fused_ordering(575) 00:14:33.632 fused_ordering(576) 00:14:33.632 fused_ordering(577) 00:14:33.632 fused_ordering(578) 00:14:33.632 fused_ordering(579) 00:14:33.632 fused_ordering(580) 00:14:33.632 fused_ordering(581) 00:14:33.632 fused_ordering(582) 00:14:33.632 fused_ordering(583) 00:14:33.632 fused_ordering(584) 00:14:33.632 fused_ordering(585) 00:14:33.632 fused_ordering(586) 00:14:33.632 fused_ordering(587) 00:14:33.632 fused_ordering(588) 00:14:33.632 fused_ordering(589) 00:14:33.632 fused_ordering(590) 00:14:33.632 fused_ordering(591) 00:14:33.632 fused_ordering(592) 00:14:33.632 fused_ordering(593) 00:14:33.632 fused_ordering(594) 00:14:33.632 fused_ordering(595) 00:14:33.632 fused_ordering(596) 00:14:33.632 fused_ordering(597) 00:14:33.632 fused_ordering(598) 00:14:33.632 fused_ordering(599) 00:14:33.632 fused_ordering(600) 00:14:33.632 fused_ordering(601) 00:14:33.632 fused_ordering(602) 00:14:33.632 fused_ordering(603) 00:14:33.632 fused_ordering(604) 00:14:33.632 fused_ordering(605) 00:14:33.632 fused_ordering(606) 00:14:33.632 fused_ordering(607) 00:14:33.632 fused_ordering(608) 00:14:33.632 fused_ordering(609) 00:14:33.632 fused_ordering(610) 00:14:33.632 fused_ordering(611) 00:14:33.632 fused_ordering(612) 00:14:33.632 fused_ordering(613) 00:14:33.632 fused_ordering(614) 00:14:33.632 fused_ordering(615) 00:14:34.199 fused_ordering(616) 00:14:34.199 fused_ordering(617) 00:14:34.199 fused_ordering(618) 00:14:34.199 fused_ordering(619) 00:14:34.199 fused_ordering(620) 00:14:34.199 fused_ordering(621) 00:14:34.199 fused_ordering(622) 00:14:34.199 fused_ordering(623) 00:14:34.199 fused_ordering(624) 00:14:34.199 fused_ordering(625) 00:14:34.199 fused_ordering(626) 00:14:34.199 fused_ordering(627) 00:14:34.199 fused_ordering(628) 00:14:34.199 fused_ordering(629) 00:14:34.199 fused_ordering(630) 00:14:34.199 fused_ordering(631) 00:14:34.199 fused_ordering(632) 00:14:34.199 fused_ordering(633) 00:14:34.199 fused_ordering(634) 00:14:34.199 fused_ordering(635) 00:14:34.199 
[fused_ordering iterations 636 through 958 completed between 00:14:34.199 and 00:14:34.460; the per-iteration "fused_ordering(N)" output is condensed here, and iterations 959 through 1023 follow below]
00:14:34.460 fused_ordering(959) 00:14:34.460 fused_ordering(960) 00:14:34.460 fused_ordering(961) 00:14:34.460 fused_ordering(962) 00:14:34.460 fused_ordering(963) 00:14:34.460 fused_ordering(964) 00:14:34.460 fused_ordering(965) 00:14:34.461 fused_ordering(966) 00:14:34.461 fused_ordering(967) 00:14:34.461 fused_ordering(968) 00:14:34.461 fused_ordering(969) 00:14:34.461 fused_ordering(970) 00:14:34.461 fused_ordering(971) 00:14:34.461 fused_ordering(972) 00:14:34.461 fused_ordering(973) 00:14:34.461 fused_ordering(974) 00:14:34.461 fused_ordering(975) 00:14:34.461 fused_ordering(976) 00:14:34.461 fused_ordering(977) 00:14:34.461 fused_ordering(978) 00:14:34.461 fused_ordering(979) 00:14:34.461 fused_ordering(980) 00:14:34.461 fused_ordering(981) 00:14:34.461 fused_ordering(982) 00:14:34.461 fused_ordering(983) 00:14:34.461 fused_ordering(984) 00:14:34.461 fused_ordering(985) 00:14:34.461 fused_ordering(986) 00:14:34.461 fused_ordering(987) 00:14:34.461 fused_ordering(988) 00:14:34.461 fused_ordering(989) 00:14:34.461 fused_ordering(990) 00:14:34.461 fused_ordering(991) 00:14:34.461 fused_ordering(992) 00:14:34.461 fused_ordering(993) 00:14:34.461 fused_ordering(994) 00:14:34.461 fused_ordering(995) 00:14:34.461 fused_ordering(996) 00:14:34.461 fused_ordering(997) 00:14:34.461 fused_ordering(998) 00:14:34.461 fused_ordering(999) 00:14:34.461 fused_ordering(1000) 00:14:34.461 fused_ordering(1001) 00:14:34.461 fused_ordering(1002) 00:14:34.461 fused_ordering(1003) 00:14:34.461 fused_ordering(1004) 00:14:34.461 fused_ordering(1005) 00:14:34.461 fused_ordering(1006) 00:14:34.461 fused_ordering(1007) 00:14:34.461 fused_ordering(1008) 00:14:34.461 fused_ordering(1009) 00:14:34.461 fused_ordering(1010) 00:14:34.461 fused_ordering(1011) 00:14:34.461 fused_ordering(1012) 00:14:34.461 fused_ordering(1013) 00:14:34.461 fused_ordering(1014) 00:14:34.461 fused_ordering(1015) 00:14:34.461 fused_ordering(1016) 00:14:34.461 fused_ordering(1017) 00:14:34.461 fused_ordering(1018) 00:14:34.461 fused_ordering(1019) 00:14:34.461 fused_ordering(1020) 00:14:34.461 fused_ordering(1021) 00:14:34.461 fused_ordering(1022) 00:14:34.461 fused_ordering(1023) 00:14:34.461 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:34.461 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:34.461 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.461 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:34.461 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.461 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:34.461 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.461 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.461 rmmod nvme_tcp 00:14:34.461 rmmod nvme_fabrics 00:14:34.461 rmmod nvme_keyring 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:34.720 07:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 688249 ']' 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 688249 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 688249 ']' 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 688249 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688249 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688249' 00:14:34.720 killing process with pid 688249 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 688249 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 688249 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.720 07:24:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.255 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:37.255 00:14:37.255 real 0m10.444s 00:14:37.255 user 0m5.041s 00:14:37.255 sys 0m5.586s 00:14:37.255 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.255 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.255 ************************************ 00:14:37.255 END TEST nvmf_fused_ordering 00:14:37.255 
************************************ 00:14:37.255 07:24:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:37.255 07:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:37.255 07:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.255 07:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.255 ************************************ 00:14:37.255 START TEST nvmf_ns_masking 00:14:37.255 ************************************ 00:14:37.255 07:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:37.255 * Looking for test storage... 00:14:37.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.255 --rc genhtml_branch_coverage=1 00:14:37.255 --rc genhtml_function_coverage=1 00:14:37.255 --rc genhtml_legend=1 00:14:37.255 --rc geninfo_all_blocks=1 00:14:37.255 --rc geninfo_unexecuted_blocks=1 00:14:37.255 00:14:37.255 ' 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.255 --rc genhtml_branch_coverage=1 00:14:37.255 --rc genhtml_function_coverage=1 00:14:37.255 --rc genhtml_legend=1 00:14:37.255 --rc geninfo_all_blocks=1 00:14:37.255 --rc geninfo_unexecuted_blocks=1 00:14:37.255 00:14:37.255 ' 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.255 --rc genhtml_branch_coverage=1 00:14:37.255 --rc genhtml_function_coverage=1 00:14:37.255 --rc genhtml_legend=1 00:14:37.255 --rc geninfo_all_blocks=1 00:14:37.255 --rc geninfo_unexecuted_blocks=1 00:14:37.255 00:14:37.255 ' 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:37.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.255 --rc genhtml_branch_coverage=1 00:14:37.255 --rc genhtml_function_coverage=1 00:14:37.255 --rc genhtml_legend=1 00:14:37.255 --rc geninfo_all_blocks=1 00:14:37.255 --rc geninfo_unexecuted_blocks=1 00:14:37.255 00:14:37.255 ' 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:37.255 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4d915b30-3143-432c-a9f7-70e5c8f5b32d 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5bbd64ff-d077-46c9-8bb5-f7f226215520 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6fecc058-2339-4841-ae21-138a90d7dd05 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:37.256 07:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:42.527 07:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.527 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:42.528 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:42.528 07:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:42.528 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:42.528 Found net devices under 0000:86:00.0: cvl_0_0 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
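For reference, a minimal hand-written sketch of the sysfs lookup reported just above for 0000:86:00.0 (and repeated for 0000:86:00.1 below): the interface name is read out of /sys/bus/pci/devices/<pci-addr>/net/. The PCI addresses are the ones this run reported; the loop itself is illustrative and not part of the captured trace.
  # Minimal sketch: resolve each E810 PCI function to its kernel net device,
  # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion above.
  for pci in 0000:86:00.0 0000:86:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdir" ] || continue   # skip functions with no bound netdev
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done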
00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:42.528 Found net devices under 0000:86:00.1: cvl_0_1 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.528 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:42.787 07:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:42.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:14:42.787 00:14:42.787 --- 10.0.0.2 ping statistics --- 00:14:42.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.787 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:42.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:42.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:14:42.787 00:14:42.787 --- 10.0.0.1 ping statistics --- 00:14:42.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.787 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=692735 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 692735 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 692735 ']' 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.787 07:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.787 [2024-11-26 07:24:10.838206] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:14:42.787 [2024-11-26 07:24:10.838250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.045 [2024-11-26 07:24:10.904964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.045 [2024-11-26 07:24:10.946325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.045 [2024-11-26 07:24:10.946359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.045 [2024-11-26 07:24:10.946366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.045 [2024-11-26 07:24:10.946371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.045 [2024-11-26 07:24:10.946376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
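Before the ns_masking RPC setup continues below, a condensed hand-written recap (a sketch, not part of the captured trace) of the nvmf_tcp_init and nvmfappstart steps traced above, using the interface names, addresses, and paths reported in this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, the nvmf-tcp-phy-autotest workspace). Backgrounding nvmf_tgt stands in for the harness's waitforlisten handling, and the iptables comment tagging is omitted.
  # Target-side port moves into its own namespace; the initiator side stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open TCP/4420 for NVMe-oF and confirm reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Launch the SPDK NVMe-oF target inside the namespace (backgrounded here;
  # the harness instead waits on /var/tmp/spdk.sock before issuing RPCs).
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &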
00:14:43.045 [2024-11-26 07:24:10.946905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.045 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.045 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:43.045 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:43.045 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.045 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:43.045 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.045 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:43.302 [2024-11-26 07:24:11.246982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.302 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:43.302 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:43.302 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:43.560 Malloc1 00:14:43.560 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:43.818 Malloc2 00:14:43.818 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:43.818 07:24:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:44.077 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.335 [2024-11-26 07:24:12.247680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.335 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:44.335 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fecc058-2339-4841-ae21-138a90d7dd05 -a 10.0.0.2 -s 4420 -i 4 00:14:44.335 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:44.335 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:44.335 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.335 07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:44.335 
07:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:46.866 [ 0]:0x1 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73579e5cd71c4708aecc27f232884028 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73579e5cd71c4708aecc27f232884028 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:46.866 [ 0]:0x1 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73579e5cd71c4708aecc27f232884028 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73579e5cd71c4708aecc27f232884028 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.866 07:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:46.866 [ 1]:0x2 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d2a64bdb53a4faeb2a57f25aad6bcee 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d2a64bdb53a4faeb2a57f25aad6bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:46.866 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.124 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.124 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:47.382 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:47.382 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fecc058-2339-4841-ae21-138a90d7dd05 -a 10.0.0.2 -s 4420 -i 4 00:14:47.640 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:47.640 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:47.640 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.640 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:47.640 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:47.640 07:24:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:49.540 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:49.540 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:49.540 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.540 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:49.540 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.540 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:49.540 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:49.540 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:49.798 [ 0]:0x2 00:14:49.798 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:49.799 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.799 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=0d2a64bdb53a4faeb2a57f25aad6bcee 00:14:49.799 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d2a64bdb53a4faeb2a57f25aad6bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.799 07:24:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:50.056 [ 0]:0x1 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73579e5cd71c4708aecc27f232884028 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73579e5cd71c4708aecc27f232884028 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:50.056 [ 1]:0x2 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:50.056 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d2a64bdb53a4faeb2a57f25aad6bcee 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d2a64bdb53a4faeb2a57f25aad6bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:50.315 07:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:50.315 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:50.573 [ 0]:0x2 00:14:50.573 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:50.573 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:50.573 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d2a64bdb53a4faeb2a57f25aad6bcee 00:14:50.573 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d2a64bdb53a4faeb2a57f25aad6bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:50.573 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:50.573 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.573 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:50.831 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:50.831 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6fecc058-2339-4841-ae21-138a90d7dd05 -a 10.0.0.2 -s 4420 -i 4 00:14:50.831 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:50.831 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:50.831 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.831 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:50.831 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:50.831 07:24:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:53.356 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:53.356 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:53.356 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.356 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:53.356 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.357 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:53.357 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:53.357 07:24:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.357 [ 0]:0x1 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73579e5cd71c4708aecc27f232884028 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73579e5cd71c4708aecc27f232884028 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.357 [ 1]:0x2 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d2a64bdb53a4faeb2a57f25aad6bcee 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d2a64bdb53a4faeb2a57f25aad6bcee != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.357 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.619 [ 0]:0x2 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d2a64bdb53a4faeb2a57f25aad6bcee 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d2a64bdb53a4faeb2a57f25aad6bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.619 07:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:53.619 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:53.879 [2024-11-26 07:24:21.754795] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:53.879 request: 00:14:53.879 { 00:14:53.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.879 "nsid": 2, 00:14:53.879 "host": "nqn.2016-06.io.spdk:host1", 00:14:53.879 "method": "nvmf_ns_remove_host", 00:14:53.879 "req_id": 1 00:14:53.879 } 00:14:53.879 Got JSON-RPC error response 00:14:53.879 response: 00:14:53.879 { 00:14:53.879 "code": -32602, 00:14:53.879 "message": "Invalid parameters" 00:14:53.879 } 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:53.879 07:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.879 [ 0]:0x2 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0d2a64bdb53a4faeb2a57f25aad6bcee 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0d2a64bdb53a4faeb2a57f25aad6bcee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=694732 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 694732 /var/tmp/host.sock 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 694732 ']' 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:53.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.879 07:24:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:53.879 [2024-11-26 07:24:21.963346] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:14:53.879 [2024-11-26 07:24:21.963390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694732 ] 00:14:54.138 [2024-11-26 07:24:22.026459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.138 [2024-11-26 07:24:22.067207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.395 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.395 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:54.395 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.395 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:54.654 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4d915b30-3143-432c-a9f7-70e5c8f5b32d 00:14:54.654 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:54.654 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4D915B303143432CA9F770E5C8F5B32D -i 00:14:54.911 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5bbd64ff-d077-46c9-8bb5-f7f226215520 00:14:54.911 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:54.911 07:24:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5BBD64FFD07746C98BB5F7F226215520 -i 00:14:55.170 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:55.428 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:55.428 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:55.428 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:55.685 nvme0n1 00:14:55.685 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:55.685 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:55.942 nvme1n2 00:14:55.942 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:55.942 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:55.942 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:55.942 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:55.942 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:56.200 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:56.200 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:56.200 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:56.200 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:56.458 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4d915b30-3143-432c-a9f7-70e5c8f5b32d == \4\d\9\1\5\b\3\0\-\3\1\4\3\-\4\3\2\c\-\a\9\f\7\-\7\0\e\5\c\8\f\5\b\3\2\d ]] 00:14:56.458 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:56.458 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:56.458 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:56.714 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
5bbd64ff-d077-46c9-8bb5-f7f226215520 == \5\b\b\d\6\4\f\f\-\d\0\7\7\-\4\6\c\9\-\8\b\b\5\-\f\7\f\2\2\6\2\1\5\5\2\0 ]] 00:14:56.714 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.714 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4d915b30-3143-432c-a9f7-70e5c8f5b32d 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4D915B303143432CA9F770E5C8F5B32D 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4D915B303143432CA9F770E5C8F5B32D 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:56.972 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4D915B303143432CA9F770E5C8F5B32D 00:14:57.229 [2024-11-26 07:24:25.168211] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:57.229 [2024-11-26 07:24:25.168244] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:57.229 [2024-11-26 07:24:25.168252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.229 request: 00:14:57.229 { 00:14:57.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:57.229 "namespace": { 00:14:57.230 "bdev_name": 
"invalid", 00:14:57.230 "nsid": 1, 00:14:57.230 "nguid": "4D915B303143432CA9F770E5C8F5B32D", 00:14:57.230 "no_auto_visible": false 00:14:57.230 }, 00:14:57.230 "method": "nvmf_subsystem_add_ns", 00:14:57.230 "req_id": 1 00:14:57.230 } 00:14:57.230 Got JSON-RPC error response 00:14:57.230 response: 00:14:57.230 { 00:14:57.230 "code": -32602, 00:14:57.230 "message": "Invalid parameters" 00:14:57.230 } 00:14:57.230 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:57.230 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.230 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.230 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.230 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4d915b30-3143-432c-a9f7-70e5c8f5b32d 00:14:57.230 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:57.230 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4D915B303143432CA9F770E5C8F5B32D -i 00:14:57.487 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:59.386 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:59.386 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:59.386 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 694732 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 694732 ']' 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 694732 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 694732 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 694732' 00:14:59.644 killing process with pid 694732 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 694732 00:14:59.644 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 694732 00:14:59.902 07:24:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:00.160 rmmod nvme_tcp 00:15:00.160 rmmod nvme_fabrics 00:15:00.160 rmmod nvme_keyring 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 692735 ']' 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 692735 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 692735 ']' 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 692735 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.160 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 692735 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 692735' 00:15:00.418 killing process with pid 692735 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 692735 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 692735 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:00.418 
07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.418 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.951 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:02.951 00:15:02.951 real 0m25.618s 00:15:02.951 user 0m30.661s 00:15:02.951 sys 0m6.749s 00:15:02.951 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.951 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.951 ************************************ 00:15:02.951 END TEST nvmf_ns_masking 00:15:02.951 ************************************ 00:15:02.951 07:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:02.951 07:24:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.952 ************************************ 00:15:02.952 START TEST nvmf_nvme_cli 00:15:02.952 ************************************ 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:02.952 * Looking for test storage... 
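(For reference, the masking flow exercised by the nvmf_ns_masking test above reduces to a handful of target-side RPCs and host-side nvme-cli checks. This is a minimal sketch using only the NQNs, address and options that appear in this run's trace; "rpc.py" stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path logged above, and the sketch summarizes the trace rather than giving a canonical recipe.)
# Target side: expose the bdev as NSID 1 without auto-visibility, then grant/revoke access per host NQN
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# Host side: connect as host1 and check visibility; the test treats a 32-zero NGUID from id-ns
# (or the NSID missing from list-ns) as "namespace not visible to this host"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420 -i 4
nvme list-ns /dev/nvme0
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # 00000000000000000000000000000000 => masked
nvme disconnect -n nqn.2016-06.io.spdk:cnode1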
00:15:02.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:02.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.952 --rc genhtml_branch_coverage=1 00:15:02.952 --rc genhtml_function_coverage=1 00:15:02.952 --rc genhtml_legend=1 00:15:02.952 --rc geninfo_all_blocks=1 00:15:02.952 --rc geninfo_unexecuted_blocks=1 00:15:02.952 00:15:02.952 ' 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:02.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.952 --rc genhtml_branch_coverage=1 00:15:02.952 --rc genhtml_function_coverage=1 00:15:02.952 --rc genhtml_legend=1 00:15:02.952 --rc geninfo_all_blocks=1 00:15:02.952 --rc geninfo_unexecuted_blocks=1 00:15:02.952 00:15:02.952 ' 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:02.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.952 --rc genhtml_branch_coverage=1 00:15:02.952 --rc genhtml_function_coverage=1 00:15:02.952 --rc genhtml_legend=1 00:15:02.952 --rc geninfo_all_blocks=1 00:15:02.952 --rc geninfo_unexecuted_blocks=1 00:15:02.952 00:15:02.952 ' 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:02.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.952 --rc genhtml_branch_coverage=1 00:15:02.952 --rc genhtml_function_coverage=1 00:15:02.952 --rc genhtml_legend=1 00:15:02.952 --rc geninfo_all_blocks=1 00:15:02.952 --rc geninfo_unexecuted_blocks=1 00:15:02.952 00:15:02.952 ' 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.952 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.953 07:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:02.953 07:24:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:08.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:08.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.221 
07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:08.221 Found net devices under 0000:86:00.0: cvl_0_0 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:08.221 Found net devices under 0000:86:00.1: cvl_0_1 00:15:08.221 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.222 07:24:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:08.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:15:08.222 00:15:08.222 --- 10.0.0.2 ping statistics --- 00:15:08.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.222 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:08.222 00:15:08.222 --- 10.0.0.1 ping statistics --- 00:15:08.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.222 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=699222 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 699222 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 699222 ']' 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.222 [2024-11-26 07:24:36.096225] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
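Before the target application is launched below, the nvmftestinit trace above has already split the two E810 ports across network namespaces and verified connectivity both ways. Condensed into a sketch (interface names and addresses taken from this run: cvl_0_1 stays in the default namespace as the initiator, cvl_0_0 moves into cvl_0_0_ns_spdk as the target):

    # target-side network setup, as traced above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside the netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port (comment tag omitted)
    ping -c 1 10.0.0.2                                                 # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> host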
00:15:08.222 [2024-11-26 07:24:36.096271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.222 [2024-11-26 07:24:36.162681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.222 [2024-11-26 07:24:36.206714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.222 [2024-11-26 07:24:36.206753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.222 [2024-11-26 07:24:36.206759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.222 [2024-11-26 07:24:36.206766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.222 [2024-11-26 07:24:36.206771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.222 [2024-11-26 07:24:36.208263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.222 [2024-11-26 07:24:36.208356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.222 [2024-11-26 07:24:36.208444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.222 [2024-11-26 07:24:36.208447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:08.222 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.480 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.480 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.480 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.480 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 [2024-11-26 07:24:36.345730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 Malloc0 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 Malloc1 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 [2024-11-26 07:24:36.447378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.481 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:08.739 00:15:08.739 Discovery Log Number of Records 2, Generation counter 2 00:15:08.739 =====Discovery Log Entry 0====== 00:15:08.739 trtype: tcp 00:15:08.739 adrfam: ipv4 00:15:08.739 subtype: current discovery subsystem 00:15:08.739 treq: not required 00:15:08.739 portid: 0 00:15:08.739 trsvcid: 4420 00:15:08.739 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:08.739 traddr: 10.0.0.2 00:15:08.739 eflags: explicit discovery connections, duplicate discovery information 00:15:08.739 sectype: none 00:15:08.739 =====Discovery Log Entry 1====== 00:15:08.739 trtype: tcp 00:15:08.739 adrfam: ipv4 00:15:08.739 subtype: nvme subsystem 00:15:08.739 treq: not required 00:15:08.739 portid: 0 00:15:08.739 trsvcid: 4420 00:15:08.739 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:08.739 traddr: 10.0.0.2 00:15:08.739 eflags: none 00:15:08.739 sectype: none 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:08.739 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:09.672 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:09.672 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:09.672 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.672 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:09.672 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:09.672 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:12.200 07:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:12.200 /dev/nvme0n2 ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.200 07:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.200 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:12.200 rmmod nvme_tcp 00:15:12.200 rmmod nvme_fabrics 00:15:12.200 rmmod nvme_keyring 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 699222 ']' 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 699222 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 699222 ']' 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 699222 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 699222 
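The records above exercise the full host-side nvme-cli path against that TCP target: discovery at 10.0.0.2:4420, a fabric connect to nqn.2016-06.io.spdk:cnode1, a namespace count via the serial number, and a clean disconnect. As a condensed sketch (NVME_HOSTNQN/NVME_HOSTID are the values nvmf/common.sh generated for this run):

    # host-side nvme-cli flow, as traced above
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
    nvme connect  --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
                  -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expects 2 (Malloc0 and Malloc1 namespaces)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1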
00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 699222' 00:15:12.200 killing process with pid 699222 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 699222 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 699222 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.200 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:12.201 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:12.201 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:12.201 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:12.201 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:12.201 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:12.201 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:12.458 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:12.458 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.458 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.458 07:24:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:14.360 00:15:14.360 real 0m11.760s 00:15:14.360 user 0m17.703s 00:15:14.360 sys 0m4.593s 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:14.360 ************************************ 00:15:14.360 END TEST nvmf_nvme_cli 00:15:14.360 ************************************ 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:14.360 ************************************ 00:15:14.360 START TEST nvmf_vfio_user 00:15:14.360 ************************************ 00:15:14.360 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:14.619 * Looking for test storage... 00:15:14.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:14.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.619 --rc genhtml_branch_coverage=1 00:15:14.619 --rc genhtml_function_coverage=1 00:15:14.619 --rc genhtml_legend=1 00:15:14.619 --rc geninfo_all_blocks=1 00:15:14.619 --rc geninfo_unexecuted_blocks=1 00:15:14.619 00:15:14.619 ' 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:14.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.619 --rc genhtml_branch_coverage=1 00:15:14.619 --rc genhtml_function_coverage=1 00:15:14.619 --rc genhtml_legend=1 00:15:14.619 --rc geninfo_all_blocks=1 00:15:14.619 --rc geninfo_unexecuted_blocks=1 00:15:14.619 00:15:14.619 ' 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:14.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.619 --rc genhtml_branch_coverage=1 00:15:14.619 --rc genhtml_function_coverage=1 00:15:14.619 --rc genhtml_legend=1 00:15:14.619 --rc geninfo_all_blocks=1 00:15:14.619 --rc geninfo_unexecuted_blocks=1 00:15:14.619 00:15:14.619 ' 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:14.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.619 --rc genhtml_branch_coverage=1 00:15:14.619 --rc genhtml_function_coverage=1 00:15:14.619 --rc genhtml_legend=1 00:15:14.619 --rc geninfo_all_blocks=1 00:15:14.619 --rc geninfo_unexecuted_blocks=1 00:15:14.619 00:15:14.619 ' 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.619 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
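As in the earlier nvme_cli run, sourcing nvmf/common.sh again logs "[: : integer expression expected" at line 33, where an empty variable reaches a numeric test ('[' '' -eq 1 ']'). A defensive form of that kind of test is sketched below; "some_flag" is only a placeholder, not the actual variable common.sh checks there:

    # guard an integer comparison against an unset or empty variable
    some_flag=""
    if [ "${some_flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi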
00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=700366 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 700366' 00:15:14.620 Process pid: 700366 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 700366 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 700366 ']' 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.620 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:14.620 [2024-11-26 07:24:42.693623] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:15:14.620 [2024-11-26 07:24:42.693674] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.878 [2024-11-26 07:24:42.757681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.878 [2024-11-26 07:24:42.798291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.878 [2024-11-26 07:24:42.798334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:14.878 [2024-11-26 07:24:42.798342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.878 [2024-11-26 07:24:42.798348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.878 [2024-11-26 07:24:42.798353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.878 [2024-11-26 07:24:42.799938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.878 [2024-11-26 07:24:42.800036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.878 [2024-11-26 07:24:42.800058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.878 [2024-11-26 07:24:42.800059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.878 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.878 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:14.878 07:24:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:16.249 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:16.249 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:16.249 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:16.249 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:16.249 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:16.249 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:16.249 Malloc1 00:15:16.508 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:16.508 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:16.766 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:17.025 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:17.025 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:17.025 07:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:17.284 Malloc2 00:15:17.284 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
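The RPC sequence for the first vfio-user device is now complete, and the trace is repeating it for the second one. Per device, the target-side setup boils down to the following (rpc.py stands for the full scripts/rpc.py path used above; the transport itself is created only once):

    # expose a 64 MB malloc bdev (512-byte blocks) over the vfio-user transport, as traced above
    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
           -a /var/run/vfio-user/domain/vfio-user1/1 -s 0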
00:15:17.284 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:17.542 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:17.801 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:17.801 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:17.801 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:17.801 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:17.801 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:17.801 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:17.801 [2024-11-26 07:24:45.812320] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:15:17.801 [2024-11-26 07:24:45.812363] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid700997 ] 00:15:17.801 [2024-11-26 07:24:45.851880] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:17.801 [2024-11-26 07:24:45.856211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:17.801 [2024-11-26 07:24:45.856232] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb1d5b8d000 00:15:17.801 [2024-11-26 07:24:45.857211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.801 [2024-11-26 07:24:45.858215] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.801 [2024-11-26 07:24:45.859225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.801 [2024-11-26 07:24:45.860232] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:17.801 [2024-11-26 07:24:45.861234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:17.801 [2024-11-26 07:24:45.862241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.801 [2024-11-26 07:24:45.863243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:17.801 [2024-11-26 07:24:45.864248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:17.801 [2024-11-26 07:24:45.865254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:17.801 [2024-11-26 07:24:45.865263] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb1d5b82000 00:15:17.801 [2024-11-26 07:24:45.866205] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:17.801 [2024-11-26 07:24:45.880394] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:17.801 [2024-11-26 07:24:45.880424] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:17.801 [2024-11-26 07:24:45.883369] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:17.801 [2024-11-26 07:24:45.883407] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:17.801 [2024-11-26 07:24:45.883471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:17.801 [2024-11-26 07:24:45.883485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:17.801 [2024-11-26 07:24:45.883491] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:17.801 [2024-11-26 07:24:45.884364] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:17.801 [2024-11-26 07:24:45.884372] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:17.801 [2024-11-26 07:24:45.884379] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:17.801 [2024-11-26 07:24:45.885372] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:17.801 [2024-11-26 07:24:45.885380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:17.801 [2024-11-26 07:24:45.885387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:17.801 [2024-11-26 07:24:45.886377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:17.801 [2024-11-26 07:24:45.886386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:17.801 [2024-11-26 07:24:45.890952] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:15:17.801 [2024-11-26 07:24:45.890961] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:17.801 [2024-11-26 07:24:45.890965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:17.801 [2024-11-26 07:24:45.890971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:17.801 [2024-11-26 07:24:45.891079] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:17.801 [2024-11-26 07:24:45.891083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:17.801 [2024-11-26 07:24:45.891088] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:17.801 [2024-11-26 07:24:45.891408] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:17.802 [2024-11-26 07:24:45.892415] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:17.802 [2024-11-26 07:24:45.893420] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:17.802 [2024-11-26 07:24:45.894420] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.802 [2024-11-26 07:24:45.894507] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:17.802 [2024-11-26 07:24:45.895437] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:17.802 [2024-11-26 07:24:45.895445] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:17.802 [2024-11-26 07:24:45.895449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:17.802 [2024-11-26 07:24:45.895477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895490] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:17.802 [2024-11-26 07:24:45.895495] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:17.802 [2024-11-26 07:24:45.895499] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:17.802 [2024-11-26 07:24:45.895510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:17.802 [2024-11-26 07:24:45.895555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:17.802 [2024-11-26 07:24:45.895563] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:17.802 [2024-11-26 07:24:45.895568] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:17.802 [2024-11-26 07:24:45.895572] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:17.802 [2024-11-26 07:24:45.895576] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:17.802 [2024-11-26 07:24:45.895584] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:17.802 [2024-11-26 07:24:45.895589] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:17.802 [2024-11-26 07:24:45.895593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:17.802 [2024-11-26 07:24:45.895623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:17.802 [2024-11-26 07:24:45.895633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.802 [2024-11-26 07:24:45.895640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.802 [2024-11-26 07:24:45.895648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.802 [2024-11-26 07:24:45.895655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:17.802 [2024-11-26 07:24:45.895662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:17.802 [2024-11-26 07:24:45.895684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:17.802 [2024-11-26 07:24:45.895691] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:17.802 
[2024-11-26 07:24:45.895696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:17.802 [2024-11-26 07:24:45.895724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:17.802 [2024-11-26 07:24:45.895775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:17.802 [2024-11-26 07:24:45.895789] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:17.802 [2024-11-26 07:24:45.895793] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:17.802 [2024-11-26 07:24:45.895796] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.062 [2024-11-26 07:24:45.895802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.895817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.895826] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:18.062 [2024-11-26 07:24:45.895834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895847] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.062 [2024-11-26 07:24:45.895851] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.062 [2024-11-26 07:24:45.895854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.062 [2024-11-26 07:24:45.895860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.895878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.895889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895904] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:18.062 [2024-11-26 07:24:45.895908] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.062 [2024-11-26 07:24:45.895911] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.062 [2024-11-26 07:24:45.895916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.895926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.895933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895972] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:18.062 [2024-11-26 07:24:45.895976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:18.062 [2024-11-26 07:24:45.895981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:18.062 [2024-11-26 07:24:45.895998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.896007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.896017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.896025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.896035] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.896048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.896058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.896068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.896080] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:18.062 [2024-11-26 07:24:45.896085] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:18.062 [2024-11-26 07:24:45.896089] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:18.062 [2024-11-26 07:24:45.896093] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:18.062 [2024-11-26 07:24:45.896096] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:18.062 [2024-11-26 07:24:45.896101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:18.062 [2024-11-26 07:24:45.896108] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:18.062 [2024-11-26 07:24:45.896112] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:18.062 [2024-11-26 07:24:45.896115] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.062 [2024-11-26 07:24:45.896121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.896127] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:18.062 [2024-11-26 07:24:45.896131] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:18.062 [2024-11-26 07:24:45.896134] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.062 [2024-11-26 07:24:45.896140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.896147] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:18.062 [2024-11-26 07:24:45.896150] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:18.062 [2024-11-26 07:24:45.896154] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:18.062 [2024-11-26 07:24:45.896159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:18.062 [2024-11-26 07:24:45.896165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.896175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.896186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:18.062 [2024-11-26 07:24:45.896193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:18.062 ===================================================== 00:15:18.062 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:18.062 ===================================================== 00:15:18.062 Controller Capabilities/Features 00:15:18.062 ================================ 00:15:18.062 Vendor ID: 4e58 00:15:18.062 Subsystem Vendor ID: 4e58 00:15:18.062 Serial Number: SPDK1 00:15:18.062 Model Number: SPDK bdev Controller 00:15:18.062 Firmware Version: 25.01 00:15:18.062 Recommended Arb Burst: 6 00:15:18.062 IEEE OUI Identifier: 8d 6b 50 00:15:18.062 Multi-path I/O 00:15:18.062 May have multiple subsystem ports: Yes 00:15:18.062 May have multiple controllers: Yes 00:15:18.062 Associated with SR-IOV VF: No 00:15:18.062 Max Data Transfer Size: 131072 00:15:18.062 Max Number of Namespaces: 32 00:15:18.062 Max Number of I/O Queues: 127 00:15:18.062 NVMe Specification Version (VS): 1.3 00:15:18.062 NVMe Specification Version (Identify): 1.3 00:15:18.062 Maximum Queue Entries: 256 00:15:18.062 Contiguous Queues Required: Yes 00:15:18.062 Arbitration Mechanisms Supported 00:15:18.062 Weighted Round Robin: Not Supported 00:15:18.062 Vendor Specific: Not Supported 00:15:18.062 Reset Timeout: 15000 ms 00:15:18.062 Doorbell Stride: 4 bytes 00:15:18.062 NVM Subsystem Reset: Not Supported 00:15:18.062 Command Sets Supported 00:15:18.062 NVM Command Set: Supported 00:15:18.062 Boot Partition: Not Supported 00:15:18.062 Memory Page Size Minimum: 4096 bytes 00:15:18.062 Memory Page Size Maximum: 4096 bytes 00:15:18.062 Persistent Memory Region: Not Supported 00:15:18.063 Optional Asynchronous Events Supported 00:15:18.063 Namespace Attribute Notices: Supported 00:15:18.063 Firmware Activation Notices: Not Supported 00:15:18.063 ANA Change Notices: Not Supported 00:15:18.063 PLE Aggregate Log Change Notices: Not Supported 00:15:18.063 LBA Status Info Alert Notices: Not Supported 00:15:18.063 EGE Aggregate Log Change Notices: Not Supported 00:15:18.063 Normal NVM Subsystem Shutdown event: Not Supported 00:15:18.063 Zone Descriptor Change Notices: Not Supported 00:15:18.063 Discovery Log Change Notices: Not Supported 00:15:18.063 Controller Attributes 00:15:18.063 128-bit Host Identifier: Supported 00:15:18.063 Non-Operational Permissive Mode: Not Supported 00:15:18.063 NVM Sets: Not Supported 00:15:18.063 Read Recovery Levels: Not Supported 00:15:18.063 Endurance Groups: Not Supported 00:15:18.063 Predictable Latency Mode: Not Supported 00:15:18.063 Traffic Based Keep ALive: Not Supported 00:15:18.063 Namespace Granularity: Not Supported 00:15:18.063 SQ Associations: Not Supported 00:15:18.063 UUID List: Not Supported 00:15:18.063 Multi-Domain Subsystem: Not Supported 00:15:18.063 Fixed Capacity Management: Not Supported 00:15:18.063 Variable Capacity Management: Not Supported 00:15:18.063 Delete Endurance Group: Not Supported 00:15:18.063 Delete NVM Set: Not Supported 00:15:18.063 Extended LBA Formats Supported: Not Supported 00:15:18.063 Flexible Data Placement Supported: Not Supported 00:15:18.063 00:15:18.063 Controller Memory Buffer Support 00:15:18.063 ================================ 00:15:18.063 
Supported: No 00:15:18.063 00:15:18.063 Persistent Memory Region Support 00:15:18.063 ================================ 00:15:18.063 Supported: No 00:15:18.063 00:15:18.063 Admin Command Set Attributes 00:15:18.063 ============================ 00:15:18.063 Security Send/Receive: Not Supported 00:15:18.063 Format NVM: Not Supported 00:15:18.063 Firmware Activate/Download: Not Supported 00:15:18.063 Namespace Management: Not Supported 00:15:18.063 Device Self-Test: Not Supported 00:15:18.063 Directives: Not Supported 00:15:18.063 NVMe-MI: Not Supported 00:15:18.063 Virtualization Management: Not Supported 00:15:18.063 Doorbell Buffer Config: Not Supported 00:15:18.063 Get LBA Status Capability: Not Supported 00:15:18.063 Command & Feature Lockdown Capability: Not Supported 00:15:18.063 Abort Command Limit: 4 00:15:18.063 Async Event Request Limit: 4 00:15:18.063 Number of Firmware Slots: N/A 00:15:18.063 Firmware Slot 1 Read-Only: N/A 00:15:18.063 Firmware Activation Without Reset: N/A 00:15:18.063 Multiple Update Detection Support: N/A 00:15:18.063 Firmware Update Granularity: No Information Provided 00:15:18.063 Per-Namespace SMART Log: No 00:15:18.063 Asymmetric Namespace Access Log Page: Not Supported 00:15:18.063 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:18.063 Command Effects Log Page: Supported 00:15:18.063 Get Log Page Extended Data: Supported 00:15:18.063 Telemetry Log Pages: Not Supported 00:15:18.063 Persistent Event Log Pages: Not Supported 00:15:18.063 Supported Log Pages Log Page: May Support 00:15:18.063 Commands Supported & Effects Log Page: Not Supported 00:15:18.063 Feature Identifiers & Effects Log Page:May Support 00:15:18.063 NVMe-MI Commands & Effects Log Page: May Support 00:15:18.063 Data Area 4 for Telemetry Log: Not Supported 00:15:18.063 Error Log Page Entries Supported: 128 00:15:18.063 Keep Alive: Supported 00:15:18.063 Keep Alive Granularity: 10000 ms 00:15:18.063 00:15:18.063 NVM Command Set Attributes 00:15:18.063 ========================== 00:15:18.063 Submission Queue Entry Size 00:15:18.063 Max: 64 00:15:18.063 Min: 64 00:15:18.063 Completion Queue Entry Size 00:15:18.063 Max: 16 00:15:18.063 Min: 16 00:15:18.063 Number of Namespaces: 32 00:15:18.063 Compare Command: Supported 00:15:18.063 Write Uncorrectable Command: Not Supported 00:15:18.063 Dataset Management Command: Supported 00:15:18.063 Write Zeroes Command: Supported 00:15:18.063 Set Features Save Field: Not Supported 00:15:18.063 Reservations: Not Supported 00:15:18.063 Timestamp: Not Supported 00:15:18.063 Copy: Supported 00:15:18.063 Volatile Write Cache: Present 00:15:18.063 Atomic Write Unit (Normal): 1 00:15:18.063 Atomic Write Unit (PFail): 1 00:15:18.063 Atomic Compare & Write Unit: 1 00:15:18.063 Fused Compare & Write: Supported 00:15:18.063 Scatter-Gather List 00:15:18.063 SGL Command Set: Supported (Dword aligned) 00:15:18.063 SGL Keyed: Not Supported 00:15:18.063 SGL Bit Bucket Descriptor: Not Supported 00:15:18.063 SGL Metadata Pointer: Not Supported 00:15:18.063 Oversized SGL: Not Supported 00:15:18.063 SGL Metadata Address: Not Supported 00:15:18.063 SGL Offset: Not Supported 00:15:18.063 Transport SGL Data Block: Not Supported 00:15:18.063 Replay Protected Memory Block: Not Supported 00:15:18.063 00:15:18.063 Firmware Slot Information 00:15:18.063 ========================= 00:15:18.063 Active slot: 1 00:15:18.063 Slot 1 Firmware Revision: 25.01 00:15:18.063 00:15:18.063 00:15:18.063 Commands Supported and Effects 00:15:18.063 ============================== 00:15:18.063 Admin 
Commands 00:15:18.063 -------------- 00:15:18.063 Get Log Page (02h): Supported 00:15:18.063 Identify (06h): Supported 00:15:18.063 Abort (08h): Supported 00:15:18.063 Set Features (09h): Supported 00:15:18.063 Get Features (0Ah): Supported 00:15:18.063 Asynchronous Event Request (0Ch): Supported 00:15:18.063 Keep Alive (18h): Supported 00:15:18.063 I/O Commands 00:15:18.063 ------------ 00:15:18.063 Flush (00h): Supported LBA-Change 00:15:18.063 Write (01h): Supported LBA-Change 00:15:18.063 Read (02h): Supported 00:15:18.063 Compare (05h): Supported 00:15:18.063 Write Zeroes (08h): Supported LBA-Change 00:15:18.063 Dataset Management (09h): Supported LBA-Change 00:15:18.063 Copy (19h): Supported LBA-Change 00:15:18.063 00:15:18.063 Error Log 00:15:18.063 ========= 00:15:18.063 00:15:18.063 Arbitration 00:15:18.063 =========== 00:15:18.063 Arbitration Burst: 1 00:15:18.063 00:15:18.063 Power Management 00:15:18.063 ================ 00:15:18.063 Number of Power States: 1 00:15:18.063 Current Power State: Power State #0 00:15:18.063 Power State #0: 00:15:18.063 Max Power: 0.00 W 00:15:18.063 Non-Operational State: Operational 00:15:18.063 Entry Latency: Not Reported 00:15:18.063 Exit Latency: Not Reported 00:15:18.063 Relative Read Throughput: 0 00:15:18.063 Relative Read Latency: 0 00:15:18.063 Relative Write Throughput: 0 00:15:18.063 Relative Write Latency: 0 00:15:18.063 Idle Power: Not Reported 00:15:18.063 Active Power: Not Reported 00:15:18.063 Non-Operational Permissive Mode: Not Supported 00:15:18.063 00:15:18.063 Health Information 00:15:18.063 ================== 00:15:18.063 Critical Warnings: 00:15:18.063 Available Spare Space: OK 00:15:18.063 Temperature: OK 00:15:18.063 Device Reliability: OK 00:15:18.063 Read Only: No 00:15:18.063 Volatile Memory Backup: OK 00:15:18.063 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:18.063 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:18.063 Available Spare: 0% 00:15:18.063 Available Sp[2024-11-26 07:24:45.896280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:18.063 [2024-11-26 07:24:45.896287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:18.063 [2024-11-26 07:24:45.896312] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:18.063 [2024-11-26 07:24:45.896320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.063 [2024-11-26 07:24:45.896326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.063 [2024-11-26 07:24:45.896332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.063 [2024-11-26 07:24:45.896337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:18.063 [2024-11-26 07:24:45.896444] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:18.063 [2024-11-26 07:24:45.896456] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:18.063 [2024-11-26 07:24:45.897446] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.063 [2024-11-26 07:24:45.897496] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:18.063 [2024-11-26 07:24:45.897503] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:18.063 [2024-11-26 07:24:45.898450] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:18.063 [2024-11-26 07:24:45.898460] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:18.063 [2024-11-26 07:24:45.898508] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:18.064 [2024-11-26 07:24:45.900488] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:18.064 are Threshold: 0% 00:15:18.064 Life Percentage Used: 0% 00:15:18.064 Data Units Read: 0 00:15:18.064 Data Units Written: 0 00:15:18.064 Host Read Commands: 0 00:15:18.064 Host Write Commands: 0 00:15:18.064 Controller Busy Time: 0 minutes 00:15:18.064 Power Cycles: 0 00:15:18.064 Power On Hours: 0 hours 00:15:18.064 Unsafe Shutdowns: 0 00:15:18.064 Unrecoverable Media Errors: 0 00:15:18.064 Lifetime Error Log Entries: 0 00:15:18.064 Warning Temperature Time: 0 minutes 00:15:18.064 Critical Temperature Time: 0 minutes 00:15:18.064 00:15:18.064 Number of Queues 00:15:18.064 ================ 00:15:18.064 Number of I/O Submission Queues: 127 00:15:18.064 Number of I/O Completion Queues: 127 00:15:18.064 00:15:18.064 Active Namespaces 00:15:18.064 ================= 00:15:18.064 Namespace ID:1 00:15:18.064 Error Recovery Timeout: Unlimited 00:15:18.064 Command Set Identifier: NVM (00h) 00:15:18.064 Deallocate: Supported 00:15:18.064 Deallocated/Unwritten Error: Not Supported 00:15:18.064 Deallocated Read Value: Unknown 00:15:18.064 Deallocate in Write Zeroes: Not Supported 00:15:18.064 Deallocated Guard Field: 0xFFFF 00:15:18.064 Flush: Supported 00:15:18.064 Reservation: Supported 00:15:18.064 Namespace Sharing Capabilities: Multiple Controllers 00:15:18.064 Size (in LBAs): 131072 (0GiB) 00:15:18.064 Capacity (in LBAs): 131072 (0GiB) 00:15:18.064 Utilization (in LBAs): 131072 (0GiB) 00:15:18.064 NGUID: 6DD90117FDBB4D70BDBAC4CCCFA0E101 00:15:18.064 UUID: 6dd90117-fdbb-4d70-bdba-c4cccfa0e101 00:15:18.064 Thin Provisioning: Not Supported 00:15:18.064 Per-NS Atomic Units: Yes 00:15:18.064 Atomic Boundary Size (Normal): 0 00:15:18.064 Atomic Boundary Size (PFail): 0 00:15:18.064 Atomic Boundary Offset: 0 00:15:18.064 Maximum Single Source Range Length: 65535 00:15:18.064 Maximum Copy Length: 65535 00:15:18.064 Maximum Source Range Count: 1 00:15:18.064 NGUID/EUI64 Never Reused: No 00:15:18.064 Namespace Write Protected: No 00:15:18.064 Number of LBA Formats: 1 00:15:18.064 Current LBA Format: LBA Format #00 00:15:18.064 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:18.064 00:15:18.064 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
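The identify invocation above and the perf, reconnect, arbitration, hello_world and overhead runs that follow all address the emulated controller through the same transport ID string; in this run it is

    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

where trtype selects the vfio-user transport, traddr is the per-controller socket directory created during setup, and subnqn is the NQN of the subsystem to connect to, as used by the tool invocations recorded in this log.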
00:15:18.064 [2024-11-26 07:24:46.138784] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.325 Initializing NVMe Controllers 00:15:23.325 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:23.325 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:23.325 Initialization complete. Launching workers. 00:15:23.325 ======================================================== 00:15:23.325 Latency(us) 00:15:23.325 Device Information : IOPS MiB/s Average min max 00:15:23.325 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39894.56 155.84 3208.27 972.91 10358.55 00:15:23.325 ======================================================== 00:15:23.325 Total : 39894.56 155.84 3208.27 972.91 10358.55 00:15:23.325 00:15:23.325 [2024-11-26 07:24:51.155915] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.325 07:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:23.325 [2024-11-26 07:24:51.401041] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.586 Initializing NVMe Controllers 00:15:28.586 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:28.586 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:28.586 Initialization complete. Launching workers. 
00:15:28.586 ======================================================== 00:15:28.586 Latency(us) 00:15:28.586 Device Information : IOPS MiB/s Average min max 00:15:28.586 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.26 62.70 7980.28 6008.83 15476.06 00:15:28.586 ======================================================== 00:15:28.586 Total : 16050.26 62.70 7980.28 6008.83 15476.06 00:15:28.586 00:15:28.586 [2024-11-26 07:24:56.441463] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:28.586 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:28.586 [2024-11-26 07:24:56.645434] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.862 [2024-11-26 07:25:01.724262] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.862 Initializing NVMe Controllers 00:15:33.862 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:33.862 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:33.862 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:33.862 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:33.862 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:33.862 Initialization complete. Launching workers. 00:15:33.862 Starting thread on core 2 00:15:33.862 Starting thread on core 3 00:15:33.862 Starting thread on core 1 00:15:33.862 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:34.121 [2024-11-26 07:25:02.031376] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.403 [2024-11-26 07:25:05.100146] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.403 Initializing NVMe Controllers 00:15:37.403 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.403 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.403 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:37.403 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:37.403 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:37.403 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:37.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:37.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:37.403 Initialization complete. Launching workers. 
00:15:37.403 Starting thread on core 1 with urgent priority queue 00:15:37.403 Starting thread on core 2 with urgent priority queue 00:15:37.403 Starting thread on core 3 with urgent priority queue 00:15:37.403 Starting thread on core 0 with urgent priority queue 00:15:37.403 SPDK bdev Controller (SPDK1 ) core 0: 5509.00 IO/s 18.15 secs/100000 ios 00:15:37.403 SPDK bdev Controller (SPDK1 ) core 1: 5872.33 IO/s 17.03 secs/100000 ios 00:15:37.403 SPDK bdev Controller (SPDK1 ) core 2: 5379.67 IO/s 18.59 secs/100000 ios 00:15:37.403 SPDK bdev Controller (SPDK1 ) core 3: 5350.67 IO/s 18.69 secs/100000 ios 00:15:37.403 ======================================================== 00:15:37.403 00:15:37.403 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:37.404 [2024-11-26 07:25:05.388379] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.404 Initializing NVMe Controllers 00:15:37.404 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.404 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:37.404 Namespace ID: 1 size: 0GB 00:15:37.404 Initialization complete. 00:15:37.404 INFO: using host memory buffer for IO 00:15:37.404 Hello world! 00:15:37.404 [2024-11-26 07:25:05.422622] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.404 07:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:37.661 [2024-11-26 07:25:05.707355] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:39.036 Initializing NVMe Controllers 00:15:39.036 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:39.036 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:39.036 Initialization complete. Launching workers. 
00:15:39.036 submit (in ns) avg, min, max = 7399.9, 3267.8, 3999627.0 00:15:39.036 complete (in ns) avg, min, max = 20647.8, 1773.0, 3998237.4 00:15:39.036 00:15:39.036 Submit histogram 00:15:39.036 ================ 00:15:39.036 Range in us Cumulative Count 00:15:39.036 3.256 - 3.270: 0.0122% ( 2) 00:15:39.036 3.270 - 3.283: 0.0366% ( 4) 00:15:39.036 3.283 - 3.297: 0.1038% ( 11) 00:15:39.037 3.297 - 3.311: 0.1893% ( 14) 00:15:39.037 3.311 - 3.325: 0.3602% ( 28) 00:15:39.037 3.325 - 3.339: 1.1539% ( 130) 00:15:39.037 3.339 - 3.353: 3.9135% ( 452) 00:15:39.037 3.353 - 3.367: 8.7124% ( 786) 00:15:39.037 3.367 - 3.381: 14.6956% ( 980) 00:15:39.037 3.381 - 3.395: 20.8316% ( 1005) 00:15:39.037 3.395 - 3.409: 27.3887% ( 1074) 00:15:39.037 3.409 - 3.423: 32.9263% ( 907) 00:15:39.037 3.423 - 3.437: 38.1586% ( 857) 00:15:39.037 3.437 - 3.450: 43.6290% ( 896) 00:15:39.037 3.450 - 3.464: 47.7929% ( 682) 00:15:39.037 3.464 - 3.478: 52.0911% ( 704) 00:15:39.037 3.478 - 3.492: 57.0731% ( 816) 00:15:39.037 3.492 - 3.506: 64.3934% ( 1199) 00:15:39.037 3.506 - 3.520: 69.7173% ( 872) 00:15:39.037 3.520 - 3.534: 74.1132% ( 720) 00:15:39.037 3.534 - 3.548: 79.3333% ( 855) 00:15:39.037 3.548 - 3.562: 83.4117% ( 668) 00:15:39.037 3.562 - 3.590: 87.0444% ( 595) 00:15:39.037 3.590 - 3.617: 87.8930% ( 139) 00:15:39.037 3.617 - 3.645: 88.6501% ( 124) 00:15:39.037 3.645 - 3.673: 90.1642% ( 248) 00:15:39.037 3.673 - 3.701: 92.0569% ( 310) 00:15:39.037 3.701 - 3.729: 93.5161% ( 239) 00:15:39.037 3.729 - 3.757: 95.1096% ( 261) 00:15:39.037 3.757 - 3.784: 96.8191% ( 280) 00:15:39.037 3.784 - 3.812: 98.1195% ( 213) 00:15:39.037 3.812 - 3.840: 98.7362% ( 101) 00:15:39.037 3.840 - 3.868: 99.1391% ( 66) 00:15:39.037 3.868 - 3.896: 99.4139% ( 45) 00:15:39.037 3.896 - 3.923: 99.5360% ( 20) 00:15:39.037 3.923 - 3.951: 99.5726% ( 6) 00:15:39.037 3.951 - 3.979: 99.5848% ( 2) 00:15:39.037 4.035 - 4.063: 99.5909% ( 1) 00:15:39.037 4.090 - 4.118: 99.5970% ( 1) 00:15:39.037 5.315 - 5.343: 99.6032% ( 1) 00:15:39.037 5.398 - 5.426: 99.6093% ( 1) 00:15:39.037 5.454 - 5.482: 99.6154% ( 1) 00:15:39.037 5.565 - 5.593: 99.6276% ( 2) 00:15:39.037 5.649 - 5.677: 99.6520% ( 4) 00:15:39.037 5.760 - 5.788: 99.6581% ( 1) 00:15:39.037 5.843 - 5.871: 99.6642% ( 1) 00:15:39.037 6.038 - 6.066: 99.6703% ( 1) 00:15:39.037 6.066 - 6.094: 99.6764% ( 1) 00:15:39.037 6.122 - 6.150: 99.6825% ( 1) 00:15:39.037 6.177 - 6.205: 99.6886% ( 1) 00:15:39.037 6.289 - 6.317: 99.6947% ( 1) 00:15:39.037 6.317 - 6.344: 99.7008% ( 1) 00:15:39.037 6.344 - 6.372: 99.7069% ( 1) 00:15:39.037 6.428 - 6.456: 99.7192% ( 2) 00:15:39.037 6.483 - 6.511: 99.7253% ( 1) 00:15:39.037 6.511 - 6.539: 99.7314% ( 1) 00:15:39.037 6.539 - 6.567: 99.7375% ( 1) 00:15:39.037 6.567 - 6.595: 99.7436% ( 1) 00:15:39.037 6.595 - 6.623: 99.7497% ( 1) 00:15:39.037 6.623 - 6.650: 99.7558% ( 1) 00:15:39.037 6.650 - 6.678: 99.7619% ( 1) 00:15:39.037 6.678 - 6.706: 99.7680% ( 1) 00:15:39.037 6.790 - 6.817: 99.7802% ( 2) 00:15:39.037 6.901 - 6.929: 99.7924% ( 2) 00:15:39.037 6.957 - 6.984: 99.7985% ( 1) 00:15:39.037 6.984 - 7.012: 99.8046% ( 1) 00:15:39.037 7.012 - 7.040: 99.8107% ( 1) 00:15:39.037 7.040 - 7.068: 99.8168% ( 1) 00:15:39.037 7.346 - 7.402: 99.8229% ( 1) 00:15:39.037 7.402 - 7.457: 99.8290% ( 1) 00:15:39.037 7.513 - 7.569: 99.8413% ( 2) 00:15:39.037 7.569 - 7.624: 99.8474% ( 1) 00:15:39.037 7.624 - 7.680: 99.8596% ( 2) 00:15:39.037 7.736 - 7.791: 99.8657% ( 1) 00:15:39.037 7.847 - 7.903: 99.8779% ( 2) 00:15:39.037 7.903 - 7.958: 99.8840% ( 1) 00:15:39.037 8.125 - 8.181: 99.8901% ( 1) 
00:15:39.037 10.017 - 10.073: 99.8962% ( 1) 00:15:39.037 13.913 - 13.969: 99.9023% ( 1) 00:15:39.037 [2024-11-26 07:25:06.726221] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:39.037 3989.148 - 4017.642: 100.0000% ( 16) 00:15:39.037 00:15:39.037 Complete histogram 00:15:39.037 ================== 00:15:39.037 Range in us Cumulative Count 00:15:39.037 1.767 - 1.774: 0.0061% ( 1) 00:15:39.037 1.774 - 1.781: 0.0672% ( 10) 00:15:39.037 1.781 - 1.795: 0.2015% ( 22) 00:15:39.037 1.795 - 1.809: 0.2442% ( 7) 00:15:39.037 1.809 - 1.823: 0.6411% ( 65) 00:15:39.037 1.823 - 1.837: 19.0793% ( 3020) 00:15:39.037 1.837 - 1.850: 46.3032% ( 4459) 00:15:39.037 1.850 - 1.864: 50.7662% ( 731) 00:15:39.037 1.864 - 1.878: 58.5872% ( 1281) 00:15:39.037 1.878 - 1.892: 82.9416% ( 3989) 00:15:39.037 1.892 - 1.906: 92.1790% ( 1513) 00:15:39.037 1.906 - 1.920: 96.2269% ( 663) 00:15:39.037 1.920 - 1.934: 97.5151% ( 211) 00:15:39.037 1.934 - 1.948: 97.8448% ( 54) 00:15:39.037 1.948 - 1.962: 98.5897% ( 122) 00:15:39.037 1.962 - 1.976: 99.1269% ( 88) 00:15:39.037 1.976 - 1.990: 99.2979% ( 28) 00:15:39.037 1.990 - 2.003: 99.3284% ( 5) 00:15:39.037 2.003 - 2.017: 99.3467% ( 3) 00:15:39.037 2.031 - 2.045: 99.3589% ( 2) 00:15:39.037 2.073 - 2.087: 99.3650% ( 1) 00:15:39.037 2.240 - 2.254: 99.3711% ( 1) 00:15:39.037 3.617 - 3.645: 99.3773% ( 1) 00:15:39.037 3.951 - 3.979: 99.3834% ( 1) 00:15:39.037 4.007 - 4.035: 99.3956% ( 2) 00:15:39.037 4.035 - 4.063: 99.4017% ( 1) 00:15:39.037 4.146 - 4.174: 99.4078% ( 1) 00:15:39.037 4.257 - 4.285: 99.4139% ( 1) 00:15:39.037 4.313 - 4.341: 99.4261% ( 2) 00:15:39.037 4.341 - 4.369: 99.4322% ( 1) 00:15:39.037 4.452 - 4.480: 99.4383% ( 1) 00:15:39.037 4.730 - 4.758: 99.4444% ( 1) 00:15:39.037 4.842 - 4.870: 99.4505% ( 1) 00:15:39.037 4.953 - 4.981: 99.4566% ( 1) 00:15:39.037 5.120 - 5.148: 99.4627% ( 1) 00:15:39.037 5.176 - 5.203: 99.4688% ( 1) 00:15:39.037 5.510 - 5.537: 99.4749% ( 1) 00:15:39.037 5.537 - 5.565: 99.4810% ( 1) 00:15:39.037 5.593 - 5.621: 99.4871% ( 1) 00:15:39.037 5.649 - 5.677: 99.4994% ( 2) 00:15:39.037 5.899 - 5.927: 99.5116% ( 2) 00:15:39.037 5.955 - 5.983: 99.5177% ( 1) 00:15:39.037 8.070 - 8.125: 99.5238% ( 1) 00:15:39.037 8.125 - 8.181: 99.5299% ( 1) 00:15:39.037 3989.148 - 4017.642: 100.0000% ( 77) 00:15:39.037 00:15:39.037 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:39.037 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:39.037 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:39.037 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:39.037 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:39.037 [ 00:15:39.037 { 00:15:39.037 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:39.037 "subtype": "Discovery", 00:15:39.037 "listen_addresses": [], 00:15:39.037 "allow_any_host": true, 00:15:39.037 "hosts": [] 00:15:39.037 }, 00:15:39.037 { 00:15:39.037 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:39.037 "subtype": "NVMe", 00:15:39.037 "listen_addresses": [ 00:15:39.037 { 00:15:39.037 "trtype": "VFIOUSER", 
00:15:39.037 "adrfam": "IPv4", 00:15:39.037 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:39.037 "trsvcid": "0" 00:15:39.037 } 00:15:39.037 ], 00:15:39.037 "allow_any_host": true, 00:15:39.037 "hosts": [], 00:15:39.037 "serial_number": "SPDK1", 00:15:39.037 "model_number": "SPDK bdev Controller", 00:15:39.037 "max_namespaces": 32, 00:15:39.037 "min_cntlid": 1, 00:15:39.037 "max_cntlid": 65519, 00:15:39.037 "namespaces": [ 00:15:39.037 { 00:15:39.037 "nsid": 1, 00:15:39.037 "bdev_name": "Malloc1", 00:15:39.037 "name": "Malloc1", 00:15:39.037 "nguid": "6DD90117FDBB4D70BDBAC4CCCFA0E101", 00:15:39.037 "uuid": "6dd90117-fdbb-4d70-bdba-c4cccfa0e101" 00:15:39.037 } 00:15:39.037 ] 00:15:39.037 }, 00:15:39.037 { 00:15:39.037 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:39.037 "subtype": "NVMe", 00:15:39.037 "listen_addresses": [ 00:15:39.037 { 00:15:39.037 "trtype": "VFIOUSER", 00:15:39.037 "adrfam": "IPv4", 00:15:39.037 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:39.037 "trsvcid": "0" 00:15:39.037 } 00:15:39.037 ], 00:15:39.037 "allow_any_host": true, 00:15:39.037 "hosts": [], 00:15:39.037 "serial_number": "SPDK2", 00:15:39.037 "model_number": "SPDK bdev Controller", 00:15:39.037 "max_namespaces": 32, 00:15:39.037 "min_cntlid": 1, 00:15:39.037 "max_cntlid": 65519, 00:15:39.037 "namespaces": [ 00:15:39.037 { 00:15:39.037 "nsid": 1, 00:15:39.037 "bdev_name": "Malloc2", 00:15:39.037 "name": "Malloc2", 00:15:39.037 "nguid": "5291EC7E8C544B18B92E41F92216EA15", 00:15:39.038 "uuid": "5291ec7e-8c54-4b18-b92e-41f92216ea15" 00:15:39.038 } 00:15:39.038 ] 00:15:39.038 } 00:15:39.038 ] 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=704456 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:39.038 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:39.296 [2024-11-26 07:25:07.134335] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:39.296 Malloc3 00:15:39.296 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:39.296 [2024-11-26 07:25:07.375058] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:39.554 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:39.554 Asynchronous Event Request test 00:15:39.554 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:39.554 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:39.554 Registering asynchronous event callbacks... 00:15:39.554 Starting namespace attribute notice tests for all controllers... 00:15:39.554 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:39.554 aer_cb - Changed Namespace 00:15:39.554 Cleaning up... 00:15:39.554 [ 00:15:39.554 { 00:15:39.554 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:39.554 "subtype": "Discovery", 00:15:39.554 "listen_addresses": [], 00:15:39.554 "allow_any_host": true, 00:15:39.554 "hosts": [] 00:15:39.554 }, 00:15:39.554 { 00:15:39.554 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:39.554 "subtype": "NVMe", 00:15:39.554 "listen_addresses": [ 00:15:39.554 { 00:15:39.554 "trtype": "VFIOUSER", 00:15:39.554 "adrfam": "IPv4", 00:15:39.554 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:39.554 "trsvcid": "0" 00:15:39.554 } 00:15:39.554 ], 00:15:39.554 "allow_any_host": true, 00:15:39.554 "hosts": [], 00:15:39.554 "serial_number": "SPDK1", 00:15:39.554 "model_number": "SPDK bdev Controller", 00:15:39.554 "max_namespaces": 32, 00:15:39.554 "min_cntlid": 1, 00:15:39.554 "max_cntlid": 65519, 00:15:39.554 "namespaces": [ 00:15:39.554 { 00:15:39.554 "nsid": 1, 00:15:39.554 "bdev_name": "Malloc1", 00:15:39.554 "name": "Malloc1", 00:15:39.554 "nguid": "6DD90117FDBB4D70BDBAC4CCCFA0E101", 00:15:39.554 "uuid": "6dd90117-fdbb-4d70-bdba-c4cccfa0e101" 00:15:39.554 }, 00:15:39.554 { 00:15:39.554 "nsid": 2, 00:15:39.554 "bdev_name": "Malloc3", 00:15:39.554 "name": "Malloc3", 00:15:39.554 "nguid": "2E13A8F7F5F94A579F379E6DBB59984D", 00:15:39.554 "uuid": "2e13a8f7-f5f9-4a57-9f37-9e6dbb59984d" 00:15:39.554 } 00:15:39.554 ] 00:15:39.554 }, 00:15:39.554 { 00:15:39.554 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:39.554 "subtype": "NVMe", 00:15:39.554 "listen_addresses": [ 00:15:39.554 { 00:15:39.554 "trtype": "VFIOUSER", 00:15:39.554 "adrfam": "IPv4", 00:15:39.554 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:39.554 "trsvcid": "0" 00:15:39.554 } 00:15:39.554 ], 00:15:39.554 "allow_any_host": true, 00:15:39.554 "hosts": [], 00:15:39.554 "serial_number": "SPDK2", 00:15:39.554 "model_number": "SPDK bdev 
Controller", 00:15:39.554 "max_namespaces": 32, 00:15:39.554 "min_cntlid": 1, 00:15:39.554 "max_cntlid": 65519, 00:15:39.554 "namespaces": [ 00:15:39.554 { 00:15:39.554 "nsid": 1, 00:15:39.554 "bdev_name": "Malloc2", 00:15:39.554 "name": "Malloc2", 00:15:39.554 "nguid": "5291EC7E8C544B18B92E41F92216EA15", 00:15:39.554 "uuid": "5291ec7e-8c54-4b18-b92e-41f92216ea15" 00:15:39.554 } 00:15:39.554 ] 00:15:39.554 } 00:15:39.554 ] 00:15:39.554 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 704456 00:15:39.554 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.554 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:39.554 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:39.554 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:39.554 [2024-11-26 07:25:07.612131] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:15:39.554 [2024-11-26 07:25:07.612166] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704476 ] 00:15:39.815 [2024-11-26 07:25:07.650776] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:39.815 [2024-11-26 07:25:07.655052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:39.815 [2024-11-26 07:25:07.655076] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3a10318000 00:15:39.815 [2024-11-26 07:25:07.656052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.815 [2024-11-26 07:25:07.657054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.815 [2024-11-26 07:25:07.658064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.815 [2024-11-26 07:25:07.659073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.815 [2024-11-26 07:25:07.660083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.815 [2024-11-26 07:25:07.661083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:39.815 [2024-11-26 07:25:07.662097] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:39.815 [2024-11-26 07:25:07.663101] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:39.815 [2024-11-26 07:25:07.664114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:39.815 [2024-11-26 07:25:07.664124] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3a1030d000 00:15:39.815 [2024-11-26 07:25:07.665064] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:39.815 [2024-11-26 07:25:07.678581] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:39.815 [2024-11-26 07:25:07.678605] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:39.815 [2024-11-26 07:25:07.680671] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:39.815 [2024-11-26 07:25:07.680711] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:39.815 [2024-11-26 07:25:07.680780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:39.815 [2024-11-26 07:25:07.680792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:39.815 [2024-11-26 07:25:07.680797] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:39.815 [2024-11-26 07:25:07.681674] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:39.815 [2024-11-26 07:25:07.681683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:39.815 [2024-11-26 07:25:07.681690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:39.815 [2024-11-26 07:25:07.682674] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:39.815 [2024-11-26 07:25:07.682683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:39.815 [2024-11-26 07:25:07.682690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:39.815 [2024-11-26 07:25:07.683683] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:39.815 [2024-11-26 07:25:07.683693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:39.815 [2024-11-26 07:25:07.684690] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:39.815 [2024-11-26 07:25:07.684699] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:39.815 [2024-11-26 07:25:07.684704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:39.815 [2024-11-26 07:25:07.684709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:39.815 [2024-11-26 07:25:07.684817] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:39.815 [2024-11-26 07:25:07.684821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:39.815 [2024-11-26 07:25:07.684826] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:39.815 [2024-11-26 07:25:07.685701] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:39.815 [2024-11-26 07:25:07.686702] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:39.815 [2024-11-26 07:25:07.687707] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:39.815 [2024-11-26 07:25:07.688711] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.815 [2024-11-26 07:25:07.688750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:39.815 [2024-11-26 07:25:07.689727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:39.815 [2024-11-26 07:25:07.689736] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:39.815 [2024-11-26 07:25:07.689741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:39.815 [2024-11-26 07:25:07.689758] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:39.815 [2024-11-26 07:25:07.689765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:39.815 [2024-11-26 07:25:07.689776] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.815 [2024-11-26 07:25:07.689781] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.815 [2024-11-26 07:25:07.689784] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.815 [2024-11-26 07:25:07.689795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.815 [2024-11-26 07:25:07.695954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:39.815 
[2024-11-26 07:25:07.695965] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:39.815 [2024-11-26 07:25:07.695971] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:39.815 [2024-11-26 07:25:07.695976] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:39.815 [2024-11-26 07:25:07.695980] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:39.815 [2024-11-26 07:25:07.695986] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:39.815 [2024-11-26 07:25:07.695991] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:39.815 [2024-11-26 07:25:07.695995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:39.815 [2024-11-26 07:25:07.696003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:39.815 [2024-11-26 07:25:07.696013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:39.815 [2024-11-26 07:25:07.703954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:39.815 [2024-11-26 07:25:07.703967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.815 [2024-11-26 07:25:07.703975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.815 [2024-11-26 07:25:07.703982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.815 [2024-11-26 07:25:07.703989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.815 [2024-11-26 07:25:07.703993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.703999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.704007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.711954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.711964] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:39.816 [2024-11-26 07:25:07.711969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:39.816 [2024-11-26 07:25:07.711975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.711980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.711988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.719952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.720006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.720017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.720025] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:39.816 [2024-11-26 07:25:07.720029] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:39.816 [2024-11-26 07:25:07.720032] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.816 [2024-11-26 07:25:07.720038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.727951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.727962] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:39.816 [2024-11-26 07:25:07.727973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.727980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.727987] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.816 [2024-11-26 07:25:07.727991] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.816 [2024-11-26 07:25:07.727994] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.816 [2024-11-26 07:25:07.727999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.735953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.735967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.735975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.735981] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:39.816 [2024-11-26 07:25:07.735985] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.816 [2024-11-26 07:25:07.735989] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.816 [2024-11-26 07:25:07.735995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.743952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.743961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.743967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.743974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.743979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.743984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.743990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.743995] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:39.816 [2024-11-26 07:25:07.743999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:39.816 [2024-11-26 07:25:07.744003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:39.816 [2024-11-26 07:25:07.744019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.751951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.751963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.763953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.763967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.771952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:39.816 [2024-11-26 07:25:07.771964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.779955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.779973] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:39.816 [2024-11-26 07:25:07.779978] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:39.816 [2024-11-26 07:25:07.779981] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:39.816 [2024-11-26 07:25:07.779984] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:39.816 [2024-11-26 07:25:07.779987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:39.816 [2024-11-26 07:25:07.779993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:39.816 [2024-11-26 07:25:07.780000] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:39.816 [2024-11-26 07:25:07.780004] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:39.816 [2024-11-26 07:25:07.780007] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.816 [2024-11-26 07:25:07.780013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.780019] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:39.816 [2024-11-26 07:25:07.780022] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:39.816 [2024-11-26 07:25:07.780025] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.816 [2024-11-26 07:25:07.780031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.780037] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:39.816 [2024-11-26 07:25:07.780042] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:39.816 [2024-11-26 07:25:07.780047] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:39.816 [2024-11-26 07:25:07.780053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:39.816 [2024-11-26 07:25:07.787955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.787969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:39.816 [2024-11-26 07:25:07.787979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:39.816 
[2024-11-26 07:25:07.787985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:39.816 ===================================================== 00:15:39.816 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:39.816 ===================================================== 00:15:39.816 Controller Capabilities/Features 00:15:39.816 ================================ 00:15:39.816 Vendor ID: 4e58 00:15:39.816 Subsystem Vendor ID: 4e58 00:15:39.816 Serial Number: SPDK2 00:15:39.816 Model Number: SPDK bdev Controller 00:15:39.816 Firmware Version: 25.01 00:15:39.816 Recommended Arb Burst: 6 00:15:39.816 IEEE OUI Identifier: 8d 6b 50 00:15:39.816 Multi-path I/O 00:15:39.816 May have multiple subsystem ports: Yes 00:15:39.816 May have multiple controllers: Yes 00:15:39.816 Associated with SR-IOV VF: No 00:15:39.816 Max Data Transfer Size: 131072 00:15:39.816 Max Number of Namespaces: 32 00:15:39.816 Max Number of I/O Queues: 127 00:15:39.816 NVMe Specification Version (VS): 1.3 00:15:39.816 NVMe Specification Version (Identify): 1.3 00:15:39.816 Maximum Queue Entries: 256 00:15:39.816 Contiguous Queues Required: Yes 00:15:39.816 Arbitration Mechanisms Supported 00:15:39.816 Weighted Round Robin: Not Supported 00:15:39.816 Vendor Specific: Not Supported 00:15:39.816 Reset Timeout: 15000 ms 00:15:39.816 Doorbell Stride: 4 bytes 00:15:39.816 NVM Subsystem Reset: Not Supported 00:15:39.817 Command Sets Supported 00:15:39.817 NVM Command Set: Supported 00:15:39.817 Boot Partition: Not Supported 00:15:39.817 Memory Page Size Minimum: 4096 bytes 00:15:39.817 Memory Page Size Maximum: 4096 bytes 00:15:39.817 Persistent Memory Region: Not Supported 00:15:39.817 Optional Asynchronous Events Supported 00:15:39.817 Namespace Attribute Notices: Supported 00:15:39.817 Firmware Activation Notices: Not Supported 00:15:39.817 ANA Change Notices: Not Supported 00:15:39.817 PLE Aggregate Log Change Notices: Not Supported 00:15:39.817 LBA Status Info Alert Notices: Not Supported 00:15:39.817 EGE Aggregate Log Change Notices: Not Supported 00:15:39.817 Normal NVM Subsystem Shutdown event: Not Supported 00:15:39.817 Zone Descriptor Change Notices: Not Supported 00:15:39.817 Discovery Log Change Notices: Not Supported 00:15:39.817 Controller Attributes 00:15:39.817 128-bit Host Identifier: Supported 00:15:39.817 Non-Operational Permissive Mode: Not Supported 00:15:39.817 NVM Sets: Not Supported 00:15:39.817 Read Recovery Levels: Not Supported 00:15:39.817 Endurance Groups: Not Supported 00:15:39.817 Predictable Latency Mode: Not Supported 00:15:39.817 Traffic Based Keep ALive: Not Supported 00:15:39.817 Namespace Granularity: Not Supported 00:15:39.817 SQ Associations: Not Supported 00:15:39.817 UUID List: Not Supported 00:15:39.817 Multi-Domain Subsystem: Not Supported 00:15:39.817 Fixed Capacity Management: Not Supported 00:15:39.817 Variable Capacity Management: Not Supported 00:15:39.817 Delete Endurance Group: Not Supported 00:15:39.817 Delete NVM Set: Not Supported 00:15:39.817 Extended LBA Formats Supported: Not Supported 00:15:39.817 Flexible Data Placement Supported: Not Supported 00:15:39.817 00:15:39.817 Controller Memory Buffer Support 00:15:39.817 ================================ 00:15:39.817 Supported: No 00:15:39.817 00:15:39.817 Persistent Memory Region Support 00:15:39.817 ================================ 00:15:39.817 Supported: No 00:15:39.817 00:15:39.817 Admin Command Set Attributes 
00:15:39.817 ============================ 00:15:39.817 Security Send/Receive: Not Supported 00:15:39.817 Format NVM: Not Supported 00:15:39.817 Firmware Activate/Download: Not Supported 00:15:39.817 Namespace Management: Not Supported 00:15:39.817 Device Self-Test: Not Supported 00:15:39.817 Directives: Not Supported 00:15:39.817 NVMe-MI: Not Supported 00:15:39.817 Virtualization Management: Not Supported 00:15:39.817 Doorbell Buffer Config: Not Supported 00:15:39.817 Get LBA Status Capability: Not Supported 00:15:39.817 Command & Feature Lockdown Capability: Not Supported 00:15:39.817 Abort Command Limit: 4 00:15:39.817 Async Event Request Limit: 4 00:15:39.817 Number of Firmware Slots: N/A 00:15:39.817 Firmware Slot 1 Read-Only: N/A 00:15:39.817 Firmware Activation Without Reset: N/A 00:15:39.817 Multiple Update Detection Support: N/A 00:15:39.817 Firmware Update Granularity: No Information Provided 00:15:39.817 Per-Namespace SMART Log: No 00:15:39.817 Asymmetric Namespace Access Log Page: Not Supported 00:15:39.817 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:39.817 Command Effects Log Page: Supported 00:15:39.817 Get Log Page Extended Data: Supported 00:15:39.817 Telemetry Log Pages: Not Supported 00:15:39.817 Persistent Event Log Pages: Not Supported 00:15:39.817 Supported Log Pages Log Page: May Support 00:15:39.817 Commands Supported & Effects Log Page: Not Supported 00:15:39.817 Feature Identifiers & Effects Log Page:May Support 00:15:39.817 NVMe-MI Commands & Effects Log Page: May Support 00:15:39.817 Data Area 4 for Telemetry Log: Not Supported 00:15:39.817 Error Log Page Entries Supported: 128 00:15:39.817 Keep Alive: Supported 00:15:39.817 Keep Alive Granularity: 10000 ms 00:15:39.817 00:15:39.817 NVM Command Set Attributes 00:15:39.817 ========================== 00:15:39.817 Submission Queue Entry Size 00:15:39.817 Max: 64 00:15:39.817 Min: 64 00:15:39.817 Completion Queue Entry Size 00:15:39.817 Max: 16 00:15:39.817 Min: 16 00:15:39.817 Number of Namespaces: 32 00:15:39.817 Compare Command: Supported 00:15:39.817 Write Uncorrectable Command: Not Supported 00:15:39.817 Dataset Management Command: Supported 00:15:39.817 Write Zeroes Command: Supported 00:15:39.817 Set Features Save Field: Not Supported 00:15:39.817 Reservations: Not Supported 00:15:39.817 Timestamp: Not Supported 00:15:39.817 Copy: Supported 00:15:39.817 Volatile Write Cache: Present 00:15:39.817 Atomic Write Unit (Normal): 1 00:15:39.817 Atomic Write Unit (PFail): 1 00:15:39.817 Atomic Compare & Write Unit: 1 00:15:39.817 Fused Compare & Write: Supported 00:15:39.817 Scatter-Gather List 00:15:39.817 SGL Command Set: Supported (Dword aligned) 00:15:39.817 SGL Keyed: Not Supported 00:15:39.817 SGL Bit Bucket Descriptor: Not Supported 00:15:39.817 SGL Metadata Pointer: Not Supported 00:15:39.817 Oversized SGL: Not Supported 00:15:39.817 SGL Metadata Address: Not Supported 00:15:39.817 SGL Offset: Not Supported 00:15:39.817 Transport SGL Data Block: Not Supported 00:15:39.817 Replay Protected Memory Block: Not Supported 00:15:39.817 00:15:39.817 Firmware Slot Information 00:15:39.817 ========================= 00:15:39.817 Active slot: 1 00:15:39.817 Slot 1 Firmware Revision: 25.01 00:15:39.817 00:15:39.817 00:15:39.817 Commands Supported and Effects 00:15:39.817 ============================== 00:15:39.817 Admin Commands 00:15:39.817 -------------- 00:15:39.817 Get Log Page (02h): Supported 00:15:39.817 Identify (06h): Supported 00:15:39.817 Abort (08h): Supported 00:15:39.817 Set Features (09h): Supported 
00:15:39.817 Get Features (0Ah): Supported 00:15:39.817 Asynchronous Event Request (0Ch): Supported 00:15:39.817 Keep Alive (18h): Supported 00:15:39.817 I/O Commands 00:15:39.817 ------------ 00:15:39.817 Flush (00h): Supported LBA-Change 00:15:39.817 Write (01h): Supported LBA-Change 00:15:39.817 Read (02h): Supported 00:15:39.817 Compare (05h): Supported 00:15:39.817 Write Zeroes (08h): Supported LBA-Change 00:15:39.817 Dataset Management (09h): Supported LBA-Change 00:15:39.817 Copy (19h): Supported LBA-Change 00:15:39.817 00:15:39.817 Error Log 00:15:39.817 ========= 00:15:39.817 00:15:39.817 Arbitration 00:15:39.817 =========== 00:15:39.817 Arbitration Burst: 1 00:15:39.817 00:15:39.817 Power Management 00:15:39.817 ================ 00:15:39.817 Number of Power States: 1 00:15:39.817 Current Power State: Power State #0 00:15:39.817 Power State #0: 00:15:39.817 Max Power: 0.00 W 00:15:39.817 Non-Operational State: Operational 00:15:39.817 Entry Latency: Not Reported 00:15:39.817 Exit Latency: Not Reported 00:15:39.817 Relative Read Throughput: 0 00:15:39.817 Relative Read Latency: 0 00:15:39.817 Relative Write Throughput: 0 00:15:39.817 Relative Write Latency: 0 00:15:39.817 Idle Power: Not Reported 00:15:39.817 Active Power: Not Reported 00:15:39.817 Non-Operational Permissive Mode: Not Supported 00:15:39.817 00:15:39.817 Health Information 00:15:39.817 ================== 00:15:39.817 Critical Warnings: 00:15:39.817 Available Spare Space: OK 00:15:39.817 Temperature: OK 00:15:39.817 Device Reliability: OK 00:15:39.817 Read Only: No 00:15:39.817 Volatile Memory Backup: OK 00:15:39.817 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:39.817 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:39.817 Available Spare: 0% 00:15:39.817 Available Sp[2024-11-26 07:25:07.788075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:39.817 [2024-11-26 07:25:07.795955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:39.817 [2024-11-26 07:25:07.795983] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:39.817 [2024-11-26 07:25:07.795992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.817 [2024-11-26 07:25:07.795997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.817 [2024-11-26 07:25:07.796003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.817 [2024-11-26 07:25:07.796008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.817 [2024-11-26 07:25:07.796058] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:39.817 [2024-11-26 07:25:07.796068] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:39.817 [2024-11-26 07:25:07.797056] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.817 [2024-11-26 07:25:07.797101] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:39.818 [2024-11-26 07:25:07.797107] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:39.818 [2024-11-26 07:25:07.798061] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:39.818 [2024-11-26 07:25:07.798072] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:39.818 [2024-11-26 07:25:07.798118] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:39.818 [2024-11-26 07:25:07.799100] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:39.818 are Threshold: 0% 00:15:39.818 Life Percentage Used: 0% 00:15:39.818 Data Units Read: 0 00:15:39.818 Data Units Written: 0 00:15:39.818 Host Read Commands: 0 00:15:39.818 Host Write Commands: 0 00:15:39.818 Controller Busy Time: 0 minutes 00:15:39.818 Power Cycles: 0 00:15:39.818 Power On Hours: 0 hours 00:15:39.818 Unsafe Shutdowns: 0 00:15:39.818 Unrecoverable Media Errors: 0 00:15:39.818 Lifetime Error Log Entries: 0 00:15:39.818 Warning Temperature Time: 0 minutes 00:15:39.818 Critical Temperature Time: 0 minutes 00:15:39.818 00:15:39.818 Number of Queues 00:15:39.818 ================ 00:15:39.818 Number of I/O Submission Queues: 127 00:15:39.818 Number of I/O Completion Queues: 127 00:15:39.818 00:15:39.818 Active Namespaces 00:15:39.818 ================= 00:15:39.818 Namespace ID:1 00:15:39.818 Error Recovery Timeout: Unlimited 00:15:39.818 Command Set Identifier: NVM (00h) 00:15:39.818 Deallocate: Supported 00:15:39.818 Deallocated/Unwritten Error: Not Supported 00:15:39.818 Deallocated Read Value: Unknown 00:15:39.818 Deallocate in Write Zeroes: Not Supported 00:15:39.818 Deallocated Guard Field: 0xFFFF 00:15:39.818 Flush: Supported 00:15:39.818 Reservation: Supported 00:15:39.818 Namespace Sharing Capabilities: Multiple Controllers 00:15:39.818 Size (in LBAs): 131072 (0GiB) 00:15:39.818 Capacity (in LBAs): 131072 (0GiB) 00:15:39.818 Utilization (in LBAs): 131072 (0GiB) 00:15:39.818 NGUID: 5291EC7E8C544B18B92E41F92216EA15 00:15:39.818 UUID: 5291ec7e-8c54-4b18-b92e-41f92216ea15 00:15:39.818 Thin Provisioning: Not Supported 00:15:39.818 Per-NS Atomic Units: Yes 00:15:39.818 Atomic Boundary Size (Normal): 0 00:15:39.818 Atomic Boundary Size (PFail): 0 00:15:39.818 Atomic Boundary Offset: 0 00:15:39.818 Maximum Single Source Range Length: 65535 00:15:39.818 Maximum Copy Length: 65535 00:15:39.818 Maximum Source Range Count: 1 00:15:39.818 NGUID/EUI64 Never Reused: No 00:15:39.818 Namespace Write Protected: No 00:15:39.818 Number of LBA Formats: 1 00:15:39.818 Current LBA Format: LBA Format #00 00:15:39.818 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:39.818 00:15:39.818 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:40.076 [2024-11-26 07:25:08.032530] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.338 Initializing NVMe Controllers 00:15:45.338 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:45.338 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:45.338 Initialization complete. Launching workers. 00:15:45.338 ======================================================== 00:15:45.338 Latency(us) 00:15:45.338 Device Information : IOPS MiB/s Average min max 00:15:45.338 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.03 156.05 3203.60 955.19 8601.21 00:15:45.338 ======================================================== 00:15:45.338 Total : 39950.03 156.05 3203.60 955.19 8601.21 00:15:45.338 00:15:45.338 [2024-11-26 07:25:13.138207] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.338 07:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:45.338 [2024-11-26 07:25:13.376906] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.599 Initializing NVMe Controllers 00:15:50.599 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:50.599 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:50.599 Initialization complete. Launching workers. 00:15:50.599 ======================================================== 00:15:50.599 Latency(us) 00:15:50.599 Device Information : IOPS MiB/s Average min max 00:15:50.599 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39936.20 156.00 3206.07 982.43 10265.18 00:15:50.599 ======================================================== 00:15:50.599 Total : 39936.20 156.00 3206.07 982.43 10265.18 00:15:50.599 00:15:50.599 [2024-11-26 07:25:18.400204] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.599 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:50.599 [2024-11-26 07:25:18.606520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.863 [2024-11-26 07:25:23.742042] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.863 Initializing NVMe Controllers 00:15:55.863 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:55.863 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:55.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:55.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:55.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:55.863 Initialization complete. Launching workers. 
00:15:55.863 Starting thread on core 2 00:15:55.863 Starting thread on core 3 00:15:55.863 Starting thread on core 1 00:15:55.863 07:25:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:56.125 [2024-11-26 07:25:24.040905] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.410 [2024-11-26 07:25:27.123756] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.410 Initializing NVMe Controllers 00:15:59.410 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.410 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.410 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:59.410 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:59.410 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:59.410 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:59.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:59.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:59.410 Initialization complete. Launching workers. 00:15:59.410 Starting thread on core 1 with urgent priority queue 00:15:59.410 Starting thread on core 2 with urgent priority queue 00:15:59.410 Starting thread on core 3 with urgent priority queue 00:15:59.410 Starting thread on core 0 with urgent priority queue 00:15:59.410 SPDK bdev Controller (SPDK2 ) core 0: 8741.00 IO/s 11.44 secs/100000 ios 00:15:59.410 SPDK bdev Controller (SPDK2 ) core 1: 8059.00 IO/s 12.41 secs/100000 ios 00:15:59.410 SPDK bdev Controller (SPDK2 ) core 2: 7639.67 IO/s 13.09 secs/100000 ios 00:15:59.410 SPDK bdev Controller (SPDK2 ) core 3: 9722.67 IO/s 10.29 secs/100000 ios 00:15:59.410 ======================================================== 00:15:59.410 00:15:59.410 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:59.410 [2024-11-26 07:25:27.408851] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:59.410 Initializing NVMe Controllers 00:15:59.410 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.410 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:59.410 Namespace ID: 1 size: 0GB 00:15:59.410 Initialization complete. 00:15:59.410 INFO: using host memory buffer for IO 00:15:59.410 Hello world! 
00:15:59.410 [2024-11-26 07:25:27.418909] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.410 07:25:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:59.668 [2024-11-26 07:25:27.703847] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:01.043 Initializing NVMe Controllers 00:16:01.043 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:01.043 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:01.043 Initialization complete. Launching workers. 00:16:01.043 submit (in ns) avg, min, max = 6724.4, 3287.8, 4000123.5 00:16:01.043 complete (in ns) avg, min, max = 20417.2, 1807.8, 4004291.3 00:16:01.043 00:16:01.043 Submit histogram 00:16:01.043 ================ 00:16:01.043 Range in us Cumulative Count 00:16:01.043 3.283 - 3.297: 0.0248% ( 4) 00:16:01.043 3.297 - 3.311: 0.0806% ( 9) 00:16:01.043 3.311 - 3.325: 0.2417% ( 26) 00:16:01.043 3.325 - 3.339: 0.7250% ( 78) 00:16:01.043 3.339 - 3.353: 1.5244% ( 129) 00:16:01.043 3.353 - 3.367: 4.3255% ( 452) 00:16:01.043 3.367 - 3.381: 9.5123% ( 837) 00:16:01.043 3.381 - 3.395: 16.0996% ( 1063) 00:16:01.043 3.395 - 3.409: 22.4577% ( 1026) 00:16:01.043 3.409 - 3.423: 28.5431% ( 982) 00:16:01.043 3.423 - 3.437: 34.3310% ( 934) 00:16:01.043 3.437 - 3.450: 39.4063% ( 819) 00:16:01.043 3.450 - 3.464: 45.3864% ( 965) 00:16:01.043 3.464 - 3.478: 49.5321% ( 669) 00:16:01.043 3.478 - 3.492: 53.3804% ( 621) 00:16:01.043 3.492 - 3.506: 57.4022% ( 649) 00:16:01.043 3.506 - 3.520: 63.8285% ( 1037) 00:16:01.043 3.520 - 3.534: 70.0440% ( 1003) 00:16:01.043 3.534 - 3.548: 74.3013% ( 687) 00:16:01.043 3.548 - 3.562: 79.1349% ( 780) 00:16:01.043 3.562 - 3.590: 85.4682% ( 1022) 00:16:01.043 3.590 - 3.617: 87.3830% ( 309) 00:16:01.043 3.617 - 3.645: 88.1143% ( 118) 00:16:01.043 3.645 - 3.673: 89.2049% ( 176) 00:16:01.043 3.673 - 3.701: 91.0392% ( 296) 00:16:01.043 3.701 - 3.729: 92.8177% ( 287) 00:16:01.043 3.729 - 3.757: 94.4661% ( 266) 00:16:01.043 3.757 - 3.784: 96.1765% ( 276) 00:16:01.043 3.784 - 3.812: 97.6080% ( 231) 00:16:01.043 3.812 - 3.840: 98.5809% ( 157) 00:16:01.043 3.840 - 3.868: 99.1014% ( 84) 00:16:01.043 3.868 - 3.896: 99.4175% ( 51) 00:16:01.043 3.896 - 3.923: 99.5786% ( 26) 00:16:01.043 3.923 - 3.951: 99.6344% ( 9) 00:16:01.043 3.951 - 3.979: 99.6406% ( 1) 00:16:01.043 4.063 - 4.090: 99.6468% ( 1) 00:16:01.043 4.174 - 4.202: 99.6530% ( 1) 00:16:01.043 4.202 - 4.230: 99.6592% ( 1) 00:16:01.043 4.452 - 4.480: 99.6654% ( 1) 00:16:01.043 5.092 - 5.120: 99.6716% ( 1) 00:16:01.043 5.231 - 5.259: 99.6778% ( 1) 00:16:01.043 5.315 - 5.343: 99.6840% ( 1) 00:16:01.043 5.426 - 5.454: 99.6902% ( 1) 00:16:01.043 5.537 - 5.565: 99.7087% ( 3) 00:16:01.043 5.593 - 5.621: 99.7149% ( 1) 00:16:01.043 5.621 - 5.649: 99.7273% ( 2) 00:16:01.043 5.649 - 5.677: 99.7459% ( 3) 00:16:01.043 5.677 - 5.704: 99.7521% ( 1) 00:16:01.043 5.704 - 5.732: 99.7583% ( 1) 00:16:01.043 5.732 - 5.760: 99.7769% ( 3) 00:16:01.043 5.760 - 5.788: 99.7831% ( 1) 00:16:01.043 5.927 - 5.955: 99.7893% ( 1) 00:16:01.043 6.038 - 6.066: 99.8017% ( 2) 00:16:01.043 6.150 - 6.177: 99.8079% ( 1) 00:16:01.043 6.177 - 6.205: 99.8265% ( 3) 00:16:01.043 6.261 - 6.289: 99.8327% ( 1) 00:16:01.043 6.372 - 6.400: 99.8389% ( 1) 00:16:01.043 6.539 - 
6.567: 99.8451% ( 1) 00:16:01.043 6.650 - 6.678: 99.8575% ( 2) 00:16:01.043 6.901 - 6.929: 99.8637% ( 1) 00:16:01.043 6.929 - 6.957: 99.8823% ( 3) 00:16:01.043 7.040 - 7.068: 99.8885% ( 1) 00:16:01.043 7.235 - 7.290: 99.8947% ( 1) 00:16:01.043 7.402 - 7.457: 99.9008% ( 1) 00:16:01.043 7.680 - 7.736: 99.9070% ( 1) 00:16:01.043 8.181 - 8.237: 99.9132% ( 1) 00:16:01.043 9.071 - 9.127: 99.9194% ( 1) 00:16:01.043 3989.148 - 4017.642: 100.0000% ( 13) 00:16:01.043 00:16:01.043 Complete histogram 00:16:01.043 ================== 00:16:01.043 Range in us Cumulative Count 00:16:01.043 1.795 - 1.809: 0.0186% ( 3) 00:16:01.043 1.809 - 1.823: 8.5456% ( 1376) 00:16:01.043 1.823 - 1.837: 54.2976% ( 7383) 00:16:01.043 1.837 - 1.850: 70.7876% ( 2661) 00:16:01.043 1.850 - 1.864: 74.3509% ( 575) 00:16:01.043 1.864 - 1.878: 82.2891% ( 1281) 00:16:01.043 1.878 - 1.892: 93.5738% ( 1821) 00:16:01.043 1.892 - 1.906: 96.7590% ( 514) 00:16:01.043 1.906 - 1.920: 98.1409% ( 223) 00:16:01.043 1.920 - [2024-11-26 07:25:28.796986] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:01.043 1.934: 98.7110% ( 92) 00:16:01.043 1.934 - 1.948: 98.9093% ( 32) 00:16:01.043 1.948 - 1.962: 99.0643% ( 25) 00:16:01.043 1.962 - 1.976: 99.2192% ( 25) 00:16:01.043 1.976 - 1.990: 99.2440% ( 4) 00:16:01.043 1.990 - 2.003: 99.2812% ( 6) 00:16:01.043 2.003 - 2.017: 99.2935% ( 2) 00:16:01.043 2.017 - 2.031: 99.2997% ( 1) 00:16:01.043 2.073 - 2.087: 99.3059% ( 1) 00:16:01.043 2.198 - 2.212: 99.3121% ( 1) 00:16:01.043 2.226 - 2.240: 99.3183% ( 1) 00:16:01.043 2.296 - 2.310: 99.3245% ( 1) 00:16:01.043 2.351 - 2.365: 99.3307% ( 1) 00:16:01.043 2.407 - 2.421: 99.3369% ( 1) 00:16:01.043 3.492 - 3.506: 99.3431% ( 1) 00:16:01.043 3.520 - 3.534: 99.3493% ( 1) 00:16:01.043 3.562 - 3.590: 99.3555% ( 1) 00:16:01.043 3.617 - 3.645: 99.3679% ( 2) 00:16:01.043 3.645 - 3.673: 99.3803% ( 2) 00:16:01.043 3.812 - 3.840: 99.3865% ( 1) 00:16:01.043 3.840 - 3.868: 99.3927% ( 1) 00:16:01.043 3.896 - 3.923: 99.3989% ( 1) 00:16:01.043 3.923 - 3.951: 99.4051% ( 1) 00:16:01.043 3.979 - 4.007: 99.4113% ( 1) 00:16:01.043 4.035 - 4.063: 99.4237% ( 2) 00:16:01.043 4.090 - 4.118: 99.4299% ( 1) 00:16:01.043 4.174 - 4.202: 99.4423% ( 2) 00:16:01.043 4.202 - 4.230: 99.4485% ( 1) 00:16:01.043 4.230 - 4.257: 99.4547% ( 1) 00:16:01.043 4.257 - 4.285: 99.4671% ( 2) 00:16:01.043 4.313 - 4.341: 99.4733% ( 1) 00:16:01.043 4.675 - 4.703: 99.4795% ( 1) 00:16:01.043 5.092 - 5.120: 99.4857% ( 1) 00:16:01.043 5.315 - 5.343: 99.4919% ( 1) 00:16:01.043 5.510 - 5.537: 99.4980% ( 1) 00:16:01.043 6.038 - 6.066: 99.5042% ( 1) 00:16:01.044 6.094 - 6.122: 99.5104% ( 1) 00:16:01.044 6.567 - 6.595: 99.5166% ( 1) 00:16:01.044 9.350 - 9.405: 99.5228% ( 1) 00:16:01.044 15.137 - 15.249: 99.5290% ( 1) 00:16:01.044 39.624 - 39.847: 99.5352% ( 1) 00:16:01.044 3989.148 - 4017.642: 100.0000% ( 75) 00:16:01.044 00:16:01.044 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:01.044 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:01.044 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:01.044 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:01.044 07:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:01.044 [ 00:16:01.044 { 00:16:01.044 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:01.044 "subtype": "Discovery", 00:16:01.044 "listen_addresses": [], 00:16:01.044 "allow_any_host": true, 00:16:01.044 "hosts": [] 00:16:01.044 }, 00:16:01.044 { 00:16:01.044 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:01.044 "subtype": "NVMe", 00:16:01.044 "listen_addresses": [ 00:16:01.044 { 00:16:01.044 "trtype": "VFIOUSER", 00:16:01.044 "adrfam": "IPv4", 00:16:01.044 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:01.044 "trsvcid": "0" 00:16:01.044 } 00:16:01.044 ], 00:16:01.044 "allow_any_host": true, 00:16:01.044 "hosts": [], 00:16:01.044 "serial_number": "SPDK1", 00:16:01.044 "model_number": "SPDK bdev Controller", 00:16:01.044 "max_namespaces": 32, 00:16:01.044 "min_cntlid": 1, 00:16:01.044 "max_cntlid": 65519, 00:16:01.044 "namespaces": [ 00:16:01.044 { 00:16:01.044 "nsid": 1, 00:16:01.044 "bdev_name": "Malloc1", 00:16:01.044 "name": "Malloc1", 00:16:01.044 "nguid": "6DD90117FDBB4D70BDBAC4CCCFA0E101", 00:16:01.044 "uuid": "6dd90117-fdbb-4d70-bdba-c4cccfa0e101" 00:16:01.044 }, 00:16:01.044 { 00:16:01.044 "nsid": 2, 00:16:01.044 "bdev_name": "Malloc3", 00:16:01.044 "name": "Malloc3", 00:16:01.044 "nguid": "2E13A8F7F5F94A579F379E6DBB59984D", 00:16:01.044 "uuid": "2e13a8f7-f5f9-4a57-9f37-9e6dbb59984d" 00:16:01.044 } 00:16:01.044 ] 00:16:01.044 }, 00:16:01.044 { 00:16:01.044 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:01.044 "subtype": "NVMe", 00:16:01.044 "listen_addresses": [ 00:16:01.044 { 00:16:01.044 "trtype": "VFIOUSER", 00:16:01.044 "adrfam": "IPv4", 00:16:01.044 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:01.044 "trsvcid": "0" 00:16:01.044 } 00:16:01.044 ], 00:16:01.044 "allow_any_host": true, 00:16:01.044 "hosts": [], 00:16:01.044 "serial_number": "SPDK2", 00:16:01.044 "model_number": "SPDK bdev Controller", 00:16:01.044 "max_namespaces": 32, 00:16:01.044 "min_cntlid": 1, 00:16:01.044 "max_cntlid": 65519, 00:16:01.044 "namespaces": [ 00:16:01.044 { 00:16:01.044 "nsid": 1, 00:16:01.044 "bdev_name": "Malloc2", 00:16:01.044 "name": "Malloc2", 00:16:01.044 "nguid": "5291EC7E8C544B18B92E41F92216EA15", 00:16:01.044 "uuid": "5291ec7e-8c54-4b18-b92e-41f92216ea15" 00:16:01.044 } 00:16:01.044 ] 00:16:01.044 } 00:16:01.044 ] 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=708035 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:01.044 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:01.301 [2024-11-26 07:25:29.204622] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:01.301 Malloc4 00:16:01.301 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:01.558 [2024-11-26 07:25:29.454477] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:01.558 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:01.558 Asynchronous Event Request test 00:16:01.558 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:01.558 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:01.558 Registering asynchronous event callbacks... 00:16:01.558 Starting namespace attribute notice tests for all controllers... 00:16:01.558 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:01.558 aer_cb - Changed Namespace 00:16:01.558 Cleaning up... 00:16:01.816 [ 00:16:01.816 { 00:16:01.816 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:01.816 "subtype": "Discovery", 00:16:01.816 "listen_addresses": [], 00:16:01.816 "allow_any_host": true, 00:16:01.816 "hosts": [] 00:16:01.816 }, 00:16:01.816 { 00:16:01.816 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:01.816 "subtype": "NVMe", 00:16:01.816 "listen_addresses": [ 00:16:01.816 { 00:16:01.816 "trtype": "VFIOUSER", 00:16:01.816 "adrfam": "IPv4", 00:16:01.816 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:01.816 "trsvcid": "0" 00:16:01.816 } 00:16:01.816 ], 00:16:01.816 "allow_any_host": true, 00:16:01.816 "hosts": [], 00:16:01.816 "serial_number": "SPDK1", 00:16:01.816 "model_number": "SPDK bdev Controller", 00:16:01.816 "max_namespaces": 32, 00:16:01.816 "min_cntlid": 1, 00:16:01.816 "max_cntlid": 65519, 00:16:01.816 "namespaces": [ 00:16:01.816 { 00:16:01.816 "nsid": 1, 00:16:01.816 "bdev_name": "Malloc1", 00:16:01.816 "name": "Malloc1", 00:16:01.816 "nguid": "6DD90117FDBB4D70BDBAC4CCCFA0E101", 00:16:01.816 "uuid": "6dd90117-fdbb-4d70-bdba-c4cccfa0e101" 00:16:01.816 }, 00:16:01.816 { 00:16:01.816 "nsid": 2, 00:16:01.816 "bdev_name": "Malloc3", 00:16:01.816 "name": "Malloc3", 00:16:01.816 "nguid": "2E13A8F7F5F94A579F379E6DBB59984D", 00:16:01.816 "uuid": "2e13a8f7-f5f9-4a57-9f37-9e6dbb59984d" 00:16:01.816 } 00:16:01.816 ] 00:16:01.816 }, 00:16:01.816 { 00:16:01.816 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:01.816 "subtype": "NVMe", 00:16:01.816 "listen_addresses": [ 00:16:01.816 { 00:16:01.816 "trtype": "VFIOUSER", 00:16:01.816 "adrfam": "IPv4", 00:16:01.816 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:01.816 "trsvcid": "0" 00:16:01.816 } 00:16:01.816 ], 00:16:01.816 "allow_any_host": true, 00:16:01.816 "hosts": [], 00:16:01.816 "serial_number": "SPDK2", 00:16:01.816 "model_number": "SPDK bdev 
Controller", 00:16:01.816 "max_namespaces": 32, 00:16:01.816 "min_cntlid": 1, 00:16:01.816 "max_cntlid": 65519, 00:16:01.816 "namespaces": [ 00:16:01.816 { 00:16:01.816 "nsid": 1, 00:16:01.816 "bdev_name": "Malloc2", 00:16:01.816 "name": "Malloc2", 00:16:01.816 "nguid": "5291EC7E8C544B18B92E41F92216EA15", 00:16:01.816 "uuid": "5291ec7e-8c54-4b18-b92e-41f92216ea15" 00:16:01.816 }, 00:16:01.816 { 00:16:01.816 "nsid": 2, 00:16:01.816 "bdev_name": "Malloc4", 00:16:01.816 "name": "Malloc4", 00:16:01.816 "nguid": "22C04E9063E54AEA8362C20DED5FA1A3", 00:16:01.816 "uuid": "22c04e90-63e5-4aea-8362-c20ded5fa1a3" 00:16:01.816 } 00:16:01.816 ] 00:16:01.816 } 00:16:01.816 ] 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 708035 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 700366 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 700366 ']' 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 700366 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 700366 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.816 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.817 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 700366' 00:16:01.817 killing process with pid 700366 00:16:01.817 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 700366 00:16:01.817 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 700366 00:16:02.074 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:02.074 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:02.074 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:02.074 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:02.074 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:02.074 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=708169 00:16:02.074 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 708169' 00:16:02.074 Process pid: 708169 00:16:02.074 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:02.075 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:02.075 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 708169 00:16:02.075 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 708169 ']' 00:16:02.075 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.075 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.075 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.075 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.075 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:02.075 [2024-11-26 07:25:30.024805] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:02.075 [2024-11-26 07:25:30.025737] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:16:02.075 [2024-11-26 07:25:30.025779] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.075 [2024-11-26 07:25:30.089855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.075 [2024-11-26 07:25:30.129114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.075 [2024-11-26 07:25:30.129155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.075 [2024-11-26 07:25:30.129162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.075 [2024-11-26 07:25:30.129168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.075 [2024-11-26 07:25:30.129174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.075 [2024-11-26 07:25:30.130663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.075 [2024-11-26 07:25:30.130762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.075 [2024-11-26 07:25:30.130825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.075 [2024-11-26 07:25:30.130827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.333 [2024-11-26 07:25:30.198538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:02.333 [2024-11-26 07:25:30.198649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:02.333 [2024-11-26 07:25:30.198854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:02.333 [2024-11-26 07:25:30.199126] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:16:02.333 [2024-11-26 07:25:30.199306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:02.333 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.333 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:02.333 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:03.268 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:03.527 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:03.527 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:03.527 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:03.527 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:03.527 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:03.784 Malloc1 00:16:03.784 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:03.784 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:04.042 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:04.300 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:04.300 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:04.300 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:04.558 Malloc2 00:16:04.558 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:04.558 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:04.815 07:25:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:05.074 07:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 708169 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 708169 ']' 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 708169 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708169 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708169' 00:16:05.074 killing process with pid 708169 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 708169 00:16:05.074 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 708169 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:05.331 00:16:05.331 real 0m50.833s 00:16:05.331 user 3m16.683s 00:16:05.331 sys 0m3.362s 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:05.331 ************************************ 00:16:05.331 END TEST nvmf_vfio_user 00:16:05.331 ************************************ 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:05.331 ************************************ 00:16:05.331 START TEST nvmf_vfio_user_nvme_compliance 00:16:05.331 ************************************ 00:16:05.331 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:05.331 * Looking for test storage... 
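To recap the nvmf_vfio_user stage that just ended above before the compliance output continues: the whole target is driven over rpc.py, and bringing up one vfio-user device condenses to a short shell sequence. A minimal sketch, using RPC as shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the trace, with the Malloc1/cnode1 names and flags lifted from the @64-@74 steps (this is a condensation of the trace, not the test script itself):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# interrupt-mode transport, matching the "-M -I" arguments passed at the @64 step
$RPC nvmf_create_transport -t VFIOUSER -M -I
# per-device socket directory, 64 MB / 512-byte-block backing bdev, subsystem, namespace, listener
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1   # -a allow any host, -s serial (see the subsystem JSON above)
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# teardown, as stop_nvmf_vfio_user does above: kill the target, then rm -rf /var/run/vfio-user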
00:16:05.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:05.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.591 --rc genhtml_branch_coverage=1 00:16:05.591 --rc genhtml_function_coverage=1 00:16:05.591 --rc genhtml_legend=1 00:16:05.591 --rc geninfo_all_blocks=1 00:16:05.591 --rc geninfo_unexecuted_blocks=1 00:16:05.591 00:16:05.591 ' 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:05.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.591 --rc genhtml_branch_coverage=1 00:16:05.591 --rc genhtml_function_coverage=1 00:16:05.591 --rc genhtml_legend=1 00:16:05.591 --rc geninfo_all_blocks=1 00:16:05.591 --rc geninfo_unexecuted_blocks=1 00:16:05.591 00:16:05.591 ' 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:05.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.591 --rc genhtml_branch_coverage=1 00:16:05.591 --rc genhtml_function_coverage=1 00:16:05.591 --rc genhtml_legend=1 00:16:05.591 --rc geninfo_all_blocks=1 00:16:05.591 --rc geninfo_unexecuted_blocks=1 00:16:05.591 00:16:05.591 ' 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:05.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.591 --rc genhtml_branch_coverage=1 00:16:05.591 --rc genhtml_function_coverage=1 00:16:05.591 --rc genhtml_legend=1 00:16:05.591 --rc geninfo_all_blocks=1 00:16:05.591 --rc 
geninfo_unexecuted_blocks=1 00:16:05.591 00:16:05.591 ' 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.591 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=708927 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 708927' 00:16:05.592 Process pid: 708927 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 708927 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 708927 ']' 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.592 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.592 [2024-11-26 07:25:33.579570] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:16:05.592 [2024-11-26 07:25:33.579615] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.592 [2024-11-26 07:25:33.641746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.592 [2024-11-26 07:25:33.684982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.592 [2024-11-26 07:25:33.685016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.592 [2024-11-26 07:25:33.685023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.592 [2024-11-26 07:25:33.685030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.592 [2024-11-26 07:25:33.685035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.850 [2024-11-26 07:25:33.686467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.850 [2024-11-26 07:25:33.686488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.850 [2024-11-26 07:25:33.686490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.850 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.850 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:05.850 07:25:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.786 malloc0 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:06.786 07:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.786 07:25:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:07.043 00:16:07.043 00:16:07.043 CUnit - A unit testing framework for C - Version 2.1-3 00:16:07.043 http://cunit.sourceforge.net/ 00:16:07.043 00:16:07.043 00:16:07.043 Suite: nvme_compliance 00:16:07.043 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 07:25:35.042411] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.043 [2024-11-26 07:25:35.043761] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:07.043 [2024-11-26 07:25:35.043776] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:07.043 [2024-11-26 07:25:35.043782] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:07.044 [2024-11-26 07:25:35.045434] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.044 passed 00:16:07.044 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 07:25:35.123976] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.044 [2024-11-26 07:25:35.127000] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.302 passed 00:16:07.302 Test: admin_identify_ns ...[2024-11-26 07:25:35.206815] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.302 [2024-11-26 07:25:35.266961] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:07.302 [2024-11-26 07:25:35.274954] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:07.303 [2024-11-26 07:25:35.296060] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:07.303 passed 00:16:07.303 Test: admin_get_features_mandatory_features ...[2024-11-26 07:25:35.373989] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.303 [2024-11-26 07:25:35.377023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.561 passed 00:16:07.561 Test: admin_get_features_optional_features ...[2024-11-26 07:25:35.458575] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.561 [2024-11-26 07:25:35.461580] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.561 passed 00:16:07.561 Test: admin_set_features_number_of_queues ...[2024-11-26 07:25:35.537460] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.561 [2024-11-26 07:25:35.642051] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.820 passed 00:16:07.820 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 07:25:35.719980] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.820 [2024-11-26 07:25:35.723004] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.820 passed 00:16:07.820 Test: admin_get_log_page_with_lpo ...[2024-11-26 07:25:35.801964] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:07.820 [2024-11-26 07:25:35.871962] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:07.820 [2024-11-26 07:25:35.884015] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:07.820 passed 00:16:08.079 Test: fabric_property_get ...[2024-11-26 07:25:35.961135] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.079 [2024-11-26 07:25:35.962382] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:08.079 [2024-11-26 07:25:35.964159] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.079 passed 00:16:08.079 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 07:25:36.042660] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.079 [2024-11-26 07:25:36.043893] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:08.079 [2024-11-26 07:25:36.045675] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.079 passed 00:16:08.079 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 07:25:36.122435] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.337 [2024-11-26 07:25:36.209961] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:08.337 [2024-11-26 07:25:36.225956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:08.337 [2024-11-26 07:25:36.231052] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.338 passed 00:16:08.338 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 07:25:36.305144] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.338 [2024-11-26 07:25:36.306374] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:08.338 [2024-11-26 07:25:36.308169] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.338 passed 00:16:08.338 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 07:25:36.386091] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.596 [2024-11-26 07:25:36.463964] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:08.596 [2024-11-26 07:25:36.487959] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:08.596 [2024-11-26 07:25:36.493043] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.596 passed 00:16:08.596 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 07:25:36.568203] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.596 [2024-11-26 07:25:36.569449] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:08.596 [2024-11-26 07:25:36.569476] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:08.596 [2024-11-26 07:25:36.571229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.596 passed 00:16:08.596 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 07:25:36.649362] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.854 [2024-11-26 07:25:36.744954] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:08.854 [2024-11-26 07:25:36.756958] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:08.854 [2024-11-26 07:25:36.764956] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:08.854 [2024-11-26 07:25:36.772952] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:08.854 [2024-11-26 07:25:36.805055] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.854 passed 00:16:08.854 Test: admin_create_io_sq_verify_pc ...[2024-11-26 07:25:36.879194] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:08.854 [2024-11-26 07:25:36.889961] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:08.854 [2024-11-26 07:25:36.907342] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:08.854 passed 00:16:09.113 Test: admin_create_io_qp_max_qps ...[2024-11-26 07:25:36.985918] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.046 [2024-11-26 07:25:38.104957] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:10.614 [2024-11-26 07:25:38.483470] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.614 passed 00:16:10.614 Test: admin_create_io_sq_shared_cq ...[2024-11-26 07:25:38.561530] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:10.614 [2024-11-26 07:25:38.692960] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:10.871 [2024-11-26 07:25:38.730021] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:10.871 passed 00:16:10.871 00:16:10.871 Run Summary: Type Total Ran Passed Failed Inactive 00:16:10.871 suites 1 1 n/a 0 0 00:16:10.871 tests 18 18 18 0 0 00:16:10.871 asserts 
360 360 360 0 n/a 00:16:10.871 00:16:10.871 Elapsed time = 1.516 seconds 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 708927 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 708927 ']' 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 708927 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 708927 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 708927' 00:16:10.871 killing process with pid 708927 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 708927 00:16:10.871 07:25:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 708927 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:11.129 00:16:11.129 real 0m5.669s 00:16:11.129 user 0m15.887s 00:16:11.129 sys 0m0.489s 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:11.129 ************************************ 00:16:11.129 END TEST nvmf_vfio_user_nvme_compliance 00:16:11.129 ************************************ 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:11.129 ************************************ 00:16:11.129 START TEST nvmf_vfio_user_fuzz 00:16:11.129 ************************************ 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:11.129 * Looking for test storage... 
00:16:11.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:11.129 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.395 --rc genhtml_branch_coverage=1 00:16:11.395 --rc genhtml_function_coverage=1 00:16:11.395 --rc genhtml_legend=1 00:16:11.395 --rc geninfo_all_blocks=1 00:16:11.395 --rc geninfo_unexecuted_blocks=1 00:16:11.395 00:16:11.395 ' 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.395 --rc genhtml_branch_coverage=1 00:16:11.395 --rc genhtml_function_coverage=1 00:16:11.395 --rc genhtml_legend=1 00:16:11.395 --rc geninfo_all_blocks=1 00:16:11.395 --rc geninfo_unexecuted_blocks=1 00:16:11.395 00:16:11.395 ' 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.395 --rc genhtml_branch_coverage=1 00:16:11.395 --rc genhtml_function_coverage=1 00:16:11.395 --rc genhtml_legend=1 00:16:11.395 --rc geninfo_all_blocks=1 00:16:11.395 --rc geninfo_unexecuted_blocks=1 00:16:11.395 00:16:11.395 ' 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.395 --rc genhtml_branch_coverage=1 00:16:11.395 --rc genhtml_function_coverage=1 00:16:11.395 --rc genhtml_legend=1 00:16:11.395 --rc geninfo_all_blocks=1 00:16:11.395 --rc geninfo_unexecuted_blocks=1 00:16:11.395 00:16:11.395 ' 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.395 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:11.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=709916 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 709916' 00:16:11.396 Process pid: 709916 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 709916 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 709916 ']' 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
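The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." lines come from waitforlisten, which blocks until the freshly launched nvmf_tgt answers on its RPC socket. The real helper lives in autotest_common.sh and is not reproduced in this trace; a generic poll loop that captures the idea might look like the sketch below, where the /var/tmp/spdk.sock default and max_retries=100 match the @839/@840 values visible above, and rpc_get_methods is just a cheap RPC that succeeds once the app is listening:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1        # target process already exited
        "$RPC" -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1                                          # gave up after max_retries polls
}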
00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.396 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:11.655 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.655 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:11.655 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:12.587 malloc0 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
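Reconstructed from the xtrace above, the target side of the vfio-user fuzz test boils down to the following RPC sequence. This is a minimal sketch rather than the test script itself; $SPDK_DIR is a placeholder for the checkout path used in the log, and the RPCs go to the default /var/tmp/spdk.sock.

  # bring up a vfio-user subsystem backed by a 64 MiB malloc bdev with 512 B blocks
  rpc=$SPDK_DIR/scripts/rpc.py                  # $SPDK_DIR: placeholder for the SPDK checkout
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user                   # directory the vfio-user listener lives in
  $rpc bdev_malloc_create 64 512 -b malloc0
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The trid string recorded at the end of this block ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is what nvme_fuzz is handed via -F in the next entries, with -t 30 capping the run at 30 seconds and -S 123456 fixing the random seed.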
00:16:12.587 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:44.682 Fuzzing completed. Shutting down the fuzz application 00:16:44.682 00:16:44.682 Dumping successful admin opcodes: 00:16:44.682 9, 10, 00:16:44.682 Dumping successful io opcodes: 00:16:44.682 0, 00:16:44.682 NS: 0x20000081ef00 I/O qp, Total commands completed: 1000479, total successful commands: 3915, random_seed: 1713529856 00:16:44.682 NS: 0x20000081ef00 admin qp, Total commands completed: 247280, total successful commands: 58, random_seed: 189893632 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 709916 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 709916 ']' 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 709916 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 709916 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 709916' 00:16:44.682 killing process with pid 709916 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 709916 00:16:44.682 07:26:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 709916 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:44.682 00:16:44.682 real 0m32.164s 00:16:44.682 user 0m29.481s 00:16:44.682 sys 0m31.512s 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:44.682 ************************************ 
00:16:44.682 END TEST nvmf_vfio_user_fuzz 00:16:44.682 ************************************ 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:44.682 ************************************ 00:16:44.682 START TEST nvmf_auth_target 00:16:44.682 ************************************ 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:44.682 * Looking for test storage... 00:16:44.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:44.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.682 --rc genhtml_branch_coverage=1 00:16:44.682 --rc genhtml_function_coverage=1 00:16:44.682 --rc genhtml_legend=1 00:16:44.682 --rc geninfo_all_blocks=1 00:16:44.682 --rc geninfo_unexecuted_blocks=1 00:16:44.682 00:16:44.682 ' 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:44.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.682 --rc genhtml_branch_coverage=1 00:16:44.682 --rc genhtml_function_coverage=1 00:16:44.682 --rc genhtml_legend=1 00:16:44.682 --rc geninfo_all_blocks=1 00:16:44.682 --rc geninfo_unexecuted_blocks=1 00:16:44.682 00:16:44.682 ' 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:44.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.682 --rc genhtml_branch_coverage=1 00:16:44.682 --rc genhtml_function_coverage=1 00:16:44.682 --rc genhtml_legend=1 00:16:44.682 --rc geninfo_all_blocks=1 00:16:44.682 --rc geninfo_unexecuted_blocks=1 00:16:44.682 00:16:44.682 ' 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:44.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.682 --rc genhtml_branch_coverage=1 00:16:44.682 --rc genhtml_function_coverage=1 00:16:44.682 --rc genhtml_legend=1 00:16:44.682 --rc geninfo_all_blocks=1 00:16:44.682 --rc geninfo_unexecuted_blocks=1 00:16:44.682 00:16:44.682 ' 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.682 07:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.682 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:44.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:44.683 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.874 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.874 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:48.874 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:48.874 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:48.874 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:48.875 
07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:48.875 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:48.875 07:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:48.875 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:48.875 Found net devices under 0000:86:00.0: cvl_0_0 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:48.875 Found net devices under 0000:86:00.1: cvl_0_1 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.875 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.135 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.135 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.135 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:49.135 07:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:49.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:16:49.135 00:16:49.135 --- 10.0.0.2 ping statistics --- 00:16:49.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.135 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:49.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:16:49.135 00:16:49.135 --- 10.0.0.1 ping statistics --- 00:16:49.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.135 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=718212 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 718212 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 718212 ']' 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.135 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
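The nvmf_tcp_init sequence above splits the two e810 ports found at 0000:86:00.0/.1 (cvl_0_0 and cvl_0_1) between a private namespace and the root namespace before the target is started. A minimal sketch of the same topology, with the interface names and addresses as assigned in the log:

  # target port goes into its own namespace with 10.0.0.2, initiator port stays in the
  # root namespace with 10.0.0.1; TCP/4420 is opened towards the initiator interface
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # log adds an SPDK_NVMF comment tag
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

Both pings coming back (0.450 ms and 0.218 ms above) confirms the path in each direction; nvmf_tgt is then launched through ip netns exec cvl_0_0_ns_spdk with -L nvmf_auth so the DH-HMAC-CHAP exchanges get traced.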
00:16:49.136 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.136 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=718232 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d543fffce75b7adee8286f7dc8f09866aabf8f0f541f2662 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.CFl 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d543fffce75b7adee8286f7dc8f09866aabf8f0f541f2662 0 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d543fffce75b7adee8286f7dc8f09866aabf8f0f541f2662 0 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d543fffce75b7adee8286f7dc8f09866aabf8f0f541f2662 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:49.395 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
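The gen_dhchap_key helper traced here (its chmod and echo tail continues just below) draws len/2 random bytes with xxd, hands them to a small python one-liner that wraps them into a DH-HMAC-CHAP secret, and returns a 0600 temp file. A stand-alone re-derivation is sketched below; the python body is an assumption, since the log only shows "python -" being invoked, and the DHHC-1 framing used here (base64 of the key bytes plus a little-endian CRC32, tagged with the digest id) is the usual NVMe DH-HMAC-CHAP secret layout rather than something the trace spells out.

  gen_dhchap_key() {
      # digest name and key length in hex characters, e.g. "sha256 32"
      local digest=$1 len=$2 key file
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # same map as the trace
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)                 # len/2 random bytes -> len hex chars
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # assumed DHHC-1 framing; the real script feeds an equivalent snippet to "python -"
      python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' \
          "$key" "${digests[$digest]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

With that in place, keys[0]=$(gen_dhchap_key null 48) yields a file like the /tmp/spdk.key-null.CFl above; the remaining slots generated next differ only in digest and length.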
00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.CFl 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.CFl 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.CFl 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dbc0540ea1811dd4af62c0d45daf63b2c98d0e88b55256b1c404a83a5bf1bb92 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bsY 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dbc0540ea1811dd4af62c0d45daf63b2c98d0e88b55256b1c404a83a5bf1bb92 3 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dbc0540ea1811dd4af62c0d45daf63b2c98d0e88b55256b1c404a83a5bf1bb92 3 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dbc0540ea1811dd4af62c0d45daf63b2c98d0e88b55256b1c404a83a5bf1bb92 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bsY 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bsY 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bsY 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4b805413f67ce61b4f1cb254daf588f3 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YJ8 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4b805413f67ce61b4f1cb254daf588f3 1 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4b805413f67ce61b4f1cb254daf588f3 1 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4b805413f67ce61b4f1cb254daf588f3 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YJ8 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YJ8 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.YJ8 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:49.655 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bbd29ea06cbbf77e6ad5b0f1f3a641aea796a918caff2943 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SNF 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bbd29ea06cbbf77e6ad5b0f1f3a641aea796a918caff2943 2 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bbd29ea06cbbf77e6ad5b0f1f3a641aea796a918caff2943 2 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:49.656 07:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bbd29ea06cbbf77e6ad5b0f1f3a641aea796a918caff2943 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SNF 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SNF 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.SNF 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4fdd79eec77fe17004c1b1ffa7781c8311ac8a0f63aa7db8 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ODq 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4fdd79eec77fe17004c1b1ffa7781c8311ac8a0f63aa7db8 2 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4fdd79eec77fe17004c1b1ffa7781c8311ac8a0f63aa7db8 2 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4fdd79eec77fe17004c1b1ffa7781c8311ac8a0f63aa7db8 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ODq 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ODq 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ODq 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:49.656 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2fab6814422a8cf944f114a476a8b7f7 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7sE 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2fab6814422a8cf944f114a476a8b7f7 1 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2fab6814422a8cf944f114a476a8b7f7 1 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2fab6814422a8cf944f114a476a8b7f7 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7sE 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7sE 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.7sE 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=70e22f166838021f16a23f851d24a422e520fee0936967b69fa80ee6af17bd7c 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SQ5 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 70e22f166838021f16a23f851d24a422e520fee0936967b69fa80ee6af17bd7c 3 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 70e22f166838021f16a23f851d24a422e520fee0936967b69fa80ee6af17bd7c 3 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=70e22f166838021f16a23f851d24a422e520fee0936967b69fa80ee6af17bd7c 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SQ5 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SQ5 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.SQ5 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 718212 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 718212 ']' 00:16:49.915 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.916 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.916 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.916 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.916 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 718232 /var/tmp/host.sock 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 718232 ']' 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:50.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
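Taken together, the seven gen_dhchap_key calls above leave the test with four key slots and three companion ckey slots. The values below are copied straight from the trace and are what the following entries register as key0..key3 and ckey0..ckey2 on both daemons:

  keys=(  /tmp/spdk.key-null.CFl       # null,   48 hex chars
          /tmp/spdk.key-sha256.YJ8     # sha256, 32 hex chars
          /tmp/spdk.key-sha384.ODq     # sha384, 48 hex chars
          /tmp/spdk.key-sha512.SQ5 )   # sha512, 64 hex chars
  ckeys=( /tmp/spdk.key-sha512.bsY     # sha512, 64 hex chars
          /tmp/spdk.key-sha384.SNF     # sha384, 48 hex chars
          /tmp/spdk.key-sha256.7sE     # sha256, 32 hex chars
          "" )                         # slot 3 has no controller key (ckeys[3] is set empty above)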
00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.175 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CFl 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.CFl 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.CFl 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.bsY ]] 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bsY 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bsY 00:16:50.434 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bsY 00:16:50.693 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:50.693 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YJ8 00:16:50.693 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.693 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.693 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.693 07:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.YJ8 00:16:50.693 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.YJ8 00:16:50.952 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.SNF ]] 00:16:50.952 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SNF 00:16:50.952 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.952 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.952 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.952 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SNF 00:16:50.952 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SNF 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ODq 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ODq 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ODq 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.7sE ]] 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7sE 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.210 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.467 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.467 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7sE 00:16:51.467 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7sE 00:16:51.467 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:51.467 07:26:19 
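Each generated key file is registered twice as a named keyring entry: once on the nvmf target (rpc_cmd) and once on the host-side SPDK application that backs the bdev_nvme controllers (hostrpc, i.e. rpc.py against /var/tmp/host.sock). A condensed sketch of that registration loop, using the key files created earlier in this run; it assumes rpc_cmd talks to the target's default /var/tmp/spdk.sock socket, and RPC stands in for the full scripts/rpc.py path used in the trace:

    RPC=scripts/rpc.py   # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py in this job
    declare -A keyfiles=(
        [key0]=/tmp/spdk.key-null.CFl    [ckey0]=/tmp/spdk.key-sha512.bsY
        [key1]=/tmp/spdk.key-sha256.YJ8  [ckey1]=/tmp/spdk.key-sha384.SNF
        [key2]=/tmp/spdk.key-sha384.ODq  [ckey2]=/tmp/spdk.key-sha256.7sE
        [key3]=/tmp/spdk.key-sha512.SQ5                                   # ckey3 is intentionally empty
    )
    for name in "${!keyfiles[@]}"; do
        $RPC keyring_file_add_key "$name" "${keyfiles[$name]}"                        # target application
        $RPC -s /var/tmp/host.sock keyring_file_add_key "$name" "${keyfiles[$name]}"  # host application
    done

The names key0..key3 and ckey0..ckey2 are what the later nvmf_subsystem_add_host and bdev_nvme_attach_controller calls in this log refer to via --dhchap-key/--dhchap-ctrlr-key.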
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SQ5 00:16:51.467 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.467 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.467 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.467 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SQ5 00:16:51.468 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SQ5 00:16:51.725 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:51.725 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:51.725 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.725 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.725 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:51.725 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.984 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.984 
07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.242 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.242 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.242 { 00:16:52.242 "cntlid": 1, 00:16:52.242 "qid": 0, 00:16:52.242 "state": "enabled", 00:16:52.242 "thread": "nvmf_tgt_poll_group_000", 00:16:52.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.242 "listen_address": { 00:16:52.242 "trtype": "TCP", 00:16:52.242 "adrfam": "IPv4", 00:16:52.242 "traddr": "10.0.0.2", 00:16:52.242 "trsvcid": "4420" 00:16:52.242 }, 00:16:52.242 "peer_address": { 00:16:52.242 "trtype": "TCP", 00:16:52.242 "adrfam": "IPv4", 00:16:52.242 "traddr": "10.0.0.1", 00:16:52.242 "trsvcid": "33954" 00:16:52.242 }, 00:16:52.242 "auth": { 00:16:52.242 "state": "completed", 00:16:52.242 "digest": "sha256", 00:16:52.242 "dhgroup": "null" 00:16:52.242 } 00:16:52.242 } 00:16:52.242 ]' 00:16:52.243 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.501 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.501 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.501 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:52.501 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.501 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.501 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.501 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.760 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:16:52.760 07:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:16:53.328 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.328 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.328 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.328 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:53.328 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.587 07:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.587 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.587 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.845 { 00:16:53.845 "cntlid": 3, 00:16:53.845 "qid": 0, 00:16:53.845 "state": "enabled", 00:16:53.845 "thread": "nvmf_tgt_poll_group_000", 00:16:53.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.845 "listen_address": { 00:16:53.845 "trtype": "TCP", 00:16:53.845 "adrfam": "IPv4", 00:16:53.845 "traddr": "10.0.0.2", 00:16:53.845 "trsvcid": "4420" 00:16:53.845 }, 00:16:53.845 "peer_address": { 00:16:53.845 "trtype": "TCP", 00:16:53.845 "adrfam": "IPv4", 00:16:53.845 "traddr": "10.0.0.1", 00:16:53.845 "trsvcid": "33976" 00:16:53.845 }, 00:16:53.845 "auth": { 00:16:53.845 "state": "completed", 00:16:53.845 "digest": "sha256", 00:16:53.845 "dhgroup": "null" 00:16:53.845 } 00:16:53.845 } 00:16:53.845 ]' 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.845 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.103 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.103 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.103 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.103 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.103 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.362 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:16:54.362 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.929 07:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.929 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.930 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.188 00:16:55.188 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.188 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.188 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.447 { 00:16:55.447 "cntlid": 5, 00:16:55.447 "qid": 0, 00:16:55.447 "state": "enabled", 00:16:55.447 "thread": "nvmf_tgt_poll_group_000", 00:16:55.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.447 "listen_address": { 00:16:55.447 "trtype": "TCP", 00:16:55.447 "adrfam": "IPv4", 00:16:55.447 "traddr": "10.0.0.2", 00:16:55.447 "trsvcid": "4420" 00:16:55.447 }, 00:16:55.447 "peer_address": { 00:16:55.447 "trtype": "TCP", 00:16:55.447 "adrfam": "IPv4", 00:16:55.447 "traddr": "10.0.0.1", 00:16:55.447 "trsvcid": "34006" 00:16:55.447 }, 00:16:55.447 "auth": { 00:16:55.447 "state": "completed", 00:16:55.447 "digest": "sha256", 00:16:55.447 "dhgroup": "null" 00:16:55.447 } 00:16:55.447 } 00:16:55.447 ]' 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:55.447 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.706 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.706 07:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.706 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.706 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:16:55.706 07:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:16:56.275 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.275 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.275 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.275 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.275 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.275 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.275 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:56.275 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.534 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.793 00:16:56.793 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.793 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.793 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.051 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.051 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.051 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.051 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.051 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.051 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.051 { 00:16:57.051 "cntlid": 7, 00:16:57.051 "qid": 0, 00:16:57.051 "state": "enabled", 00:16:57.051 "thread": "nvmf_tgt_poll_group_000", 00:16:57.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.051 "listen_address": { 00:16:57.051 "trtype": "TCP", 00:16:57.051 "adrfam": "IPv4", 00:16:57.051 "traddr": "10.0.0.2", 00:16:57.051 "trsvcid": "4420" 00:16:57.051 }, 00:16:57.051 "peer_address": { 00:16:57.051 "trtype": "TCP", 00:16:57.051 "adrfam": "IPv4", 00:16:57.051 "traddr": "10.0.0.1", 00:16:57.051 "trsvcid": "34886" 00:16:57.051 }, 00:16:57.051 "auth": { 00:16:57.051 "state": "completed", 00:16:57.051 "digest": "sha256", 00:16:57.051 "dhgroup": "null" 00:16:57.052 } 00:16:57.052 } 00:16:57.052 ]' 00:16:57.052 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.052 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.052 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.052 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:57.052 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.310 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.310 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.310 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.310 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:16:57.310 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:57.877 07:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.136 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.395 00:16:58.395 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.395 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.395 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.654 { 00:16:58.654 "cntlid": 9, 00:16:58.654 "qid": 0, 00:16:58.654 "state": "enabled", 00:16:58.654 "thread": "nvmf_tgt_poll_group_000", 00:16:58.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.654 "listen_address": { 00:16:58.654 "trtype": "TCP", 00:16:58.654 "adrfam": "IPv4", 00:16:58.654 "traddr": "10.0.0.2", 00:16:58.654 "trsvcid": "4420" 00:16:58.654 }, 00:16:58.654 "peer_address": { 00:16:58.654 "trtype": "TCP", 00:16:58.654 "adrfam": "IPv4", 00:16:58.654 "traddr": "10.0.0.1", 00:16:58.654 "trsvcid": "34906" 00:16:58.654 }, 00:16:58.654 "auth": { 00:16:58.654 "state": "completed", 00:16:58.654 "digest": "sha256", 00:16:58.654 "dhgroup": "ffdhe2048" 00:16:58.654 } 00:16:58.654 } 00:16:58.654 ]' 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.654 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.912 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:16:58.912 07:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:16:59.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:59.480 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.738 07:26:27 
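The authentication loop traced from here on repeats one fixed pattern per digest/dhgroup/key index. A condensed sketch of a single iteration (sha256 / ffdhe2048 / key1, the one that begins in the next trace lines), restricted to RPCs and nvme-cli calls that actually appear in this log; NQN, host ID and key-file names are taken from this run, and plain $RPC calls standing in for rpc_cmd against the target's default socket are an assumption:

    RPC=scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # restrict the host-side bdev_nvme layer to the digest/dhgroup under test
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # register the host on the target with its DH-HMAC-CHAP key (and controller key for bidirectional auth)
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # attach an authenticated controller from the SPDK host, then check that the qpair finished auth
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expected: "completed"
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, passing the raw DHHC-1 secrets from the key files
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
        --dhchap-secret "$(cat /tmp/spdk.key-sha256.YJ8)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha384.SNF)"
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"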
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.738 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.996 00:16:59.996 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.996 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.996 07:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.256 { 00:17:00.256 "cntlid": 11, 00:17:00.256 "qid": 0, 00:17:00.256 "state": "enabled", 00:17:00.256 "thread": "nvmf_tgt_poll_group_000", 00:17:00.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.256 "listen_address": { 00:17:00.256 "trtype": "TCP", 00:17:00.256 "adrfam": "IPv4", 00:17:00.256 "traddr": "10.0.0.2", 00:17:00.256 "trsvcid": "4420" 00:17:00.256 }, 00:17:00.256 "peer_address": { 00:17:00.256 "trtype": "TCP", 00:17:00.256 "adrfam": "IPv4", 00:17:00.256 "traddr": "10.0.0.1", 00:17:00.256 "trsvcid": "34926" 00:17:00.256 }, 00:17:00.256 "auth": { 00:17:00.256 "state": "completed", 00:17:00.256 "digest": "sha256", 00:17:00.256 "dhgroup": "ffdhe2048" 00:17:00.256 } 00:17:00.256 } 00:17:00.256 ]' 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.256 07:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.256 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.514 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:00.514 07:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:01.081 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.081 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.081 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.081 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.081 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.081 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.081 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.081 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:01.339 07:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.339 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.597 00:17:01.597 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.597 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.597 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.856 { 00:17:01.856 "cntlid": 13, 00:17:01.856 "qid": 0, 00:17:01.856 "state": "enabled", 00:17:01.856 "thread": "nvmf_tgt_poll_group_000", 00:17:01.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.856 "listen_address": { 00:17:01.856 "trtype": "TCP", 00:17:01.856 "adrfam": "IPv4", 00:17:01.856 "traddr": "10.0.0.2", 00:17:01.856 "trsvcid": "4420" 00:17:01.856 }, 00:17:01.856 "peer_address": { 00:17:01.856 "trtype": "TCP", 00:17:01.856 "adrfam": "IPv4", 00:17:01.856 "traddr": "10.0.0.1", 00:17:01.856 "trsvcid": "34962" 00:17:01.856 }, 00:17:01.856 "auth": { 00:17:01.856 "state": "completed", 00:17:01.856 "digest": 
"sha256", 00:17:01.856 "dhgroup": "ffdhe2048" 00:17:01.856 } 00:17:01.856 } 00:17:01.856 ]' 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.856 07:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.114 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:02.115 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:02.682 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.682 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.682 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.682 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.682 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.682 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.682 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.682 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.940 07:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.940 07:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.199 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.199 { 00:17:03.199 "cntlid": 15, 00:17:03.199 "qid": 0, 00:17:03.199 "state": "enabled", 00:17:03.199 "thread": "nvmf_tgt_poll_group_000", 00:17:03.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.199 "listen_address": { 00:17:03.199 "trtype": "TCP", 00:17:03.199 "adrfam": "IPv4", 00:17:03.199 "traddr": "10.0.0.2", 00:17:03.199 "trsvcid": "4420" 00:17:03.199 }, 00:17:03.199 "peer_address": { 00:17:03.199 "trtype": "TCP", 00:17:03.199 "adrfam": "IPv4", 00:17:03.199 "traddr": "10.0.0.1", 00:17:03.199 
"trsvcid": "34982" 00:17:03.199 }, 00:17:03.199 "auth": { 00:17:03.199 "state": "completed", 00:17:03.199 "digest": "sha256", 00:17:03.199 "dhgroup": "ffdhe2048" 00:17:03.199 } 00:17:03.199 } 00:17:03.199 ]' 00:17:03.199 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.457 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.457 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.457 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.457 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.457 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.457 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.457 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.716 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:03.716 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:04.291 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:04.592 07:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.592 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.592 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.905 { 00:17:04.905 "cntlid": 17, 00:17:04.905 "qid": 0, 00:17:04.905 "state": "enabled", 00:17:04.905 "thread": "nvmf_tgt_poll_group_000", 00:17:04.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.905 "listen_address": { 00:17:04.905 "trtype": "TCP", 00:17:04.905 "adrfam": "IPv4", 
00:17:04.905 "traddr": "10.0.0.2", 00:17:04.905 "trsvcid": "4420" 00:17:04.905 }, 00:17:04.905 "peer_address": { 00:17:04.905 "trtype": "TCP", 00:17:04.905 "adrfam": "IPv4", 00:17:04.905 "traddr": "10.0.0.1", 00:17:04.905 "trsvcid": "35018" 00:17:04.905 }, 00:17:04.905 "auth": { 00:17:04.905 "state": "completed", 00:17:04.905 "digest": "sha256", 00:17:04.905 "dhgroup": "ffdhe3072" 00:17:04.905 } 00:17:04.905 } 00:17:04.905 ]' 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.905 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.211 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.211 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.211 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.211 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:05.211 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:05.866 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.866 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.866 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.866 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.866 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.866 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.866 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.866 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.146 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:06.146 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.146 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.146 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:06.146 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.146 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.147 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.147 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.147 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.147 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.147 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.147 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.147 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.147 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.479 { 
00:17:06.479 "cntlid": 19, 00:17:06.479 "qid": 0, 00:17:06.479 "state": "enabled", 00:17:06.479 "thread": "nvmf_tgt_poll_group_000", 00:17:06.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.479 "listen_address": { 00:17:06.479 "trtype": "TCP", 00:17:06.479 "adrfam": "IPv4", 00:17:06.479 "traddr": "10.0.0.2", 00:17:06.479 "trsvcid": "4420" 00:17:06.479 }, 00:17:06.479 "peer_address": { 00:17:06.479 "trtype": "TCP", 00:17:06.479 "adrfam": "IPv4", 00:17:06.479 "traddr": "10.0.0.1", 00:17:06.479 "trsvcid": "50090" 00:17:06.479 }, 00:17:06.479 "auth": { 00:17:06.479 "state": "completed", 00:17:06.479 "digest": "sha256", 00:17:06.479 "dhgroup": "ffdhe3072" 00:17:06.479 } 00:17:06.479 } 00:17:06.479 ]' 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.479 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.737 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.737 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.737 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:06.737 07:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:07.303 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.304 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.304 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.304 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.304 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.304 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.304 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.304 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.561 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:07.561 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.561 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.562 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.819 00:17:07.819 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.819 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.819 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.077 07:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.077 { 00:17:08.077 "cntlid": 21, 00:17:08.077 "qid": 0, 00:17:08.077 "state": "enabled", 00:17:08.077 "thread": "nvmf_tgt_poll_group_000", 00:17:08.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.077 "listen_address": { 00:17:08.077 "trtype": "TCP", 00:17:08.077 "adrfam": "IPv4", 00:17:08.077 "traddr": "10.0.0.2", 00:17:08.077 "trsvcid": "4420" 00:17:08.077 }, 00:17:08.077 "peer_address": { 00:17:08.077 "trtype": "TCP", 00:17:08.077 "adrfam": "IPv4", 00:17:08.077 "traddr": "10.0.0.1", 00:17:08.077 "trsvcid": "50118" 00:17:08.077 }, 00:17:08.077 "auth": { 00:17:08.077 "state": "completed", 00:17:08.077 "digest": "sha256", 00:17:08.077 "dhgroup": "ffdhe3072" 00:17:08.077 } 00:17:08.077 } 00:17:08.077 ]' 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.077 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.335 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.335 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.335 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.335 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:08.335 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:08.899 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.899 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.899 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.900 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.900 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:08.900 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.900 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:08.900 07:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.157 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.415 00:17:09.415 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.415 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.415 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.673 07:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.673 { 00:17:09.673 "cntlid": 23, 00:17:09.673 "qid": 0, 00:17:09.673 "state": "enabled", 00:17:09.673 "thread": "nvmf_tgt_poll_group_000", 00:17:09.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.673 "listen_address": { 00:17:09.673 "trtype": "TCP", 00:17:09.673 "adrfam": "IPv4", 00:17:09.673 "traddr": "10.0.0.2", 00:17:09.673 "trsvcid": "4420" 00:17:09.673 }, 00:17:09.673 "peer_address": { 00:17:09.673 "trtype": "TCP", 00:17:09.673 "adrfam": "IPv4", 00:17:09.673 "traddr": "10.0.0.1", 00:17:09.673 "trsvcid": "50150" 00:17:09.673 }, 00:17:09.673 "auth": { 00:17:09.673 "state": "completed", 00:17:09.673 "digest": "sha256", 00:17:09.673 "dhgroup": "ffdhe3072" 00:17:09.673 } 00:17:09.673 } 00:17:09.673 ]' 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.673 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.931 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:09.931 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:10.496 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.754 07:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.012 00:17:11.012 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.012 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.012 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.270 { 00:17:11.270 "cntlid": 25, 00:17:11.270 "qid": 0, 00:17:11.270 "state": "enabled", 00:17:11.270 "thread": "nvmf_tgt_poll_group_000", 00:17:11.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.270 "listen_address": { 00:17:11.270 "trtype": "TCP", 00:17:11.270 "adrfam": "IPv4", 00:17:11.270 "traddr": "10.0.0.2", 00:17:11.270 "trsvcid": "4420" 00:17:11.270 }, 00:17:11.270 "peer_address": { 00:17:11.270 "trtype": "TCP", 00:17:11.270 "adrfam": "IPv4", 00:17:11.270 "traddr": "10.0.0.1", 00:17:11.270 "trsvcid": "50170" 00:17:11.270 }, 00:17:11.270 "auth": { 00:17:11.270 "state": "completed", 00:17:11.270 "digest": "sha256", 00:17:11.270 "dhgroup": "ffdhe4096" 00:17:11.270 } 00:17:11.270 } 00:17:11.270 ]' 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.270 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.528 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:11.528 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:12.092 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.092 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.092 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.092 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.092 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.092 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.092 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.092 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.350 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.607 00:17:12.607 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.607 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.607 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.865 { 00:17:12.865 "cntlid": 27, 00:17:12.865 "qid": 0, 00:17:12.865 "state": "enabled", 00:17:12.865 "thread": "nvmf_tgt_poll_group_000", 00:17:12.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.865 "listen_address": { 00:17:12.865 "trtype": "TCP", 00:17:12.865 "adrfam": "IPv4", 00:17:12.865 "traddr": "10.0.0.2", 00:17:12.865 "trsvcid": "4420" 00:17:12.865 }, 00:17:12.865 "peer_address": { 00:17:12.865 "trtype": "TCP", 00:17:12.865 "adrfam": "IPv4", 00:17:12.865 "traddr": "10.0.0.1", 00:17:12.865 "trsvcid": "50188" 00:17:12.865 }, 00:17:12.865 "auth": { 00:17:12.865 "state": "completed", 00:17:12.865 "digest": "sha256", 00:17:12.865 "dhgroup": "ffdhe4096" 00:17:12.865 } 00:17:12.865 } 00:17:12.865 ]' 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.865 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.123 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.123 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.123 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.123 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:13.123 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:13.688 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:13.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.688 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.688 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.688 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.688 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.688 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.688 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:13.688 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:13.945 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:13.945 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.945 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.945 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:13.945 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.945 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.945 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.946 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.946 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.946 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.946 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.946 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.946 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.203 00:17:14.203 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:17:14.203 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.203 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.461 { 00:17:14.461 "cntlid": 29, 00:17:14.461 "qid": 0, 00:17:14.461 "state": "enabled", 00:17:14.461 "thread": "nvmf_tgt_poll_group_000", 00:17:14.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:14.461 "listen_address": { 00:17:14.461 "trtype": "TCP", 00:17:14.461 "adrfam": "IPv4", 00:17:14.461 "traddr": "10.0.0.2", 00:17:14.461 "trsvcid": "4420" 00:17:14.461 }, 00:17:14.461 "peer_address": { 00:17:14.461 "trtype": "TCP", 00:17:14.461 "adrfam": "IPv4", 00:17:14.461 "traddr": "10.0.0.1", 00:17:14.461 "trsvcid": "50214" 00:17:14.461 }, 00:17:14.461 "auth": { 00:17:14.461 "state": "completed", 00:17:14.461 "digest": "sha256", 00:17:14.461 "dhgroup": "ffdhe4096" 00:17:14.461 } 00:17:14.461 } 00:17:14.461 ]' 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.461 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.720 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.720 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.720 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.720 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:14.720 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: 
--dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:15.286 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.286 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.286 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.286 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.286 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.287 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.287 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.287 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:15.544 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.545 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.802 00:17:15.802 07:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.802 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.802 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.061 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.061 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.061 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.061 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.061 { 00:17:16.061 "cntlid": 31, 00:17:16.061 "qid": 0, 00:17:16.061 "state": "enabled", 00:17:16.061 "thread": "nvmf_tgt_poll_group_000", 00:17:16.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.061 "listen_address": { 00:17:16.061 "trtype": "TCP", 00:17:16.061 "adrfam": "IPv4", 00:17:16.061 "traddr": "10.0.0.2", 00:17:16.061 "trsvcid": "4420" 00:17:16.061 }, 00:17:16.061 "peer_address": { 00:17:16.061 "trtype": "TCP", 00:17:16.061 "adrfam": "IPv4", 00:17:16.061 "traddr": "10.0.0.1", 00:17:16.061 "trsvcid": "39258" 00:17:16.061 }, 00:17:16.061 "auth": { 00:17:16.061 "state": "completed", 00:17:16.061 "digest": "sha256", 00:17:16.061 "dhgroup": "ffdhe4096" 00:17:16.061 } 00:17:16.061 } 00:17:16.061 ]' 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.061 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.319 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:16.319 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.885 07:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.143 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:17.143 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.143 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.143 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.143 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.144 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.144 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.144 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.144 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.144 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.144 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.144 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.144 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.403 00:17:17.403 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.403 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.403 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.662 { 00:17:17.662 "cntlid": 33, 00:17:17.662 "qid": 0, 00:17:17.662 "state": "enabled", 00:17:17.662 "thread": "nvmf_tgt_poll_group_000", 00:17:17.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.662 "listen_address": { 00:17:17.662 "trtype": "TCP", 00:17:17.662 "adrfam": "IPv4", 00:17:17.662 "traddr": "10.0.0.2", 00:17:17.662 "trsvcid": "4420" 00:17:17.662 }, 00:17:17.662 "peer_address": { 00:17:17.662 "trtype": "TCP", 00:17:17.662 "adrfam": "IPv4", 00:17:17.662 "traddr": "10.0.0.1", 00:17:17.662 "trsvcid": "39294" 00:17:17.662 }, 00:17:17.662 "auth": { 00:17:17.662 "state": "completed", 00:17:17.662 "digest": "sha256", 00:17:17.662 "dhgroup": "ffdhe6144" 00:17:17.662 } 00:17:17.662 } 00:17:17.662 ]' 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.662 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.921 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.921 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.921 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.921 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:17.921 07:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:18.487 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.487 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.487 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.487 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.487 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.487 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.487 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.487 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.746 07:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.005 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.263 { 00:17:19.263 "cntlid": 35, 00:17:19.263 "qid": 0, 00:17:19.263 "state": "enabled", 00:17:19.263 "thread": "nvmf_tgt_poll_group_000", 00:17:19.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.263 "listen_address": { 00:17:19.263 "trtype": "TCP", 00:17:19.263 "adrfam": "IPv4", 00:17:19.263 "traddr": "10.0.0.2", 00:17:19.263 "trsvcid": "4420" 00:17:19.263 }, 00:17:19.263 "peer_address": { 00:17:19.263 "trtype": "TCP", 00:17:19.263 "adrfam": "IPv4", 00:17:19.263 "traddr": "10.0.0.1", 00:17:19.263 "trsvcid": "39316" 00:17:19.263 }, 00:17:19.263 "auth": { 00:17:19.263 "state": "completed", 00:17:19.263 "digest": "sha256", 00:17:19.263 "dhgroup": "ffdhe6144" 00:17:19.263 } 00:17:19.263 } 00:17:19.263 ]' 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.263 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.521 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.522 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.522 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.522 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.522 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.522 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:19.522 07:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:20.090 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.348 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.915 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.915 { 00:17:20.915 "cntlid": 37, 00:17:20.915 "qid": 0, 00:17:20.915 "state": "enabled", 00:17:20.915 "thread": "nvmf_tgt_poll_group_000", 00:17:20.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.915 "listen_address": { 00:17:20.915 "trtype": "TCP", 00:17:20.915 "adrfam": "IPv4", 00:17:20.915 "traddr": "10.0.0.2", 00:17:20.915 "trsvcid": "4420" 00:17:20.915 }, 00:17:20.915 "peer_address": { 00:17:20.915 "trtype": "TCP", 00:17:20.915 "adrfam": "IPv4", 00:17:20.915 "traddr": "10.0.0.1", 00:17:20.915 "trsvcid": "39344" 00:17:20.915 }, 00:17:20.915 "auth": { 00:17:20.915 "state": "completed", 00:17:20.915 "digest": "sha256", 00:17:20.915 "dhgroup": "ffdhe6144" 00:17:20.915 } 00:17:20.915 } 00:17:20.915 ]' 00:17:20.915 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.173 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.173 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.173 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.173 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.173 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.173 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:21.173 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.432 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:21.432 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:22.000 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.000 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.000 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.000 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.000 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.000 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.000 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:22.000 07:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.258 07:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.258 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.517 00:17:22.517 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.517 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.517 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.777 { 00:17:22.777 "cntlid": 39, 00:17:22.777 "qid": 0, 00:17:22.777 "state": "enabled", 00:17:22.777 "thread": "nvmf_tgt_poll_group_000", 00:17:22.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:22.777 "listen_address": { 00:17:22.777 "trtype": "TCP", 00:17:22.777 "adrfam": "IPv4", 00:17:22.777 "traddr": "10.0.0.2", 00:17:22.777 "trsvcid": "4420" 00:17:22.777 }, 00:17:22.777 "peer_address": { 00:17:22.777 "trtype": "TCP", 00:17:22.777 "adrfam": "IPv4", 00:17:22.777 "traddr": "10.0.0.1", 00:17:22.777 "trsvcid": "39378" 00:17:22.777 }, 00:17:22.777 "auth": { 00:17:22.777 "state": "completed", 00:17:22.777 "digest": "sha256", 00:17:22.777 "dhgroup": "ffdhe6144" 00:17:22.777 } 00:17:22.777 } 00:17:22.777 ]' 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.777 07:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:23.037 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.605 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
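The per-key cycle that the trace above keeps repeating can be summarised with the short shell sketch below. It is reconstructed only from the RPC and nvme-cli invocations visible in this log; the rpc.py path, host RPC socket, NQNs, addresses and the key names key0/ckey0 are the ones shown above, the keyring registration of those keys happens earlier in auth.sh (not shown here), and the DHHC-1 secrets are left as placeholders rather than the real test keys.

# one connect_authenticate pass (digest sha256, dhgroup ffdhe8192, key index 0), as exercised above
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
# host side (-s /var/tmp/host.sock): restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# target side (default socket): authorize the host with a key pair; key0/ckey0 are keyring names set up earlier in the test
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller, then confirm the qpair negotiated the expected auth parameters
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# same keys through the kernel initiator; substitute the DHHC-1 secrets printed in the log for the placeholders
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host-key>:' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-key>:'
nvme disconnect -n "$subnqn"
# drop the host entry again before the next digest/dhgroup/key combination
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"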
00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.864 07:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.430 00:17:24.430 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.430 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.430 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.430 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.430 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.430 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.430 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.689 { 00:17:24.689 "cntlid": 41, 00:17:24.689 "qid": 0, 00:17:24.689 "state": "enabled", 00:17:24.689 "thread": "nvmf_tgt_poll_group_000", 00:17:24.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:24.689 "listen_address": { 00:17:24.689 "trtype": "TCP", 00:17:24.689 "adrfam": "IPv4", 00:17:24.689 "traddr": "10.0.0.2", 00:17:24.689 "trsvcid": "4420" 00:17:24.689 }, 00:17:24.689 "peer_address": { 00:17:24.689 "trtype": "TCP", 00:17:24.689 "adrfam": "IPv4", 00:17:24.689 "traddr": "10.0.0.1", 00:17:24.689 "trsvcid": "39408" 00:17:24.689 }, 00:17:24.689 "auth": { 00:17:24.689 "state": "completed", 00:17:24.689 "digest": "sha256", 00:17:24.689 "dhgroup": "ffdhe8192" 00:17:24.689 } 00:17:24.689 } 00:17:24.689 ]' 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.689 07:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.689 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.947 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:24.947 07:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:25.515 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.515 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.515 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.515 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.515 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.515 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.515 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.515 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.774 07:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.342 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.342 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.342 { 00:17:26.342 "cntlid": 43, 00:17:26.342 "qid": 0, 00:17:26.342 "state": "enabled", 00:17:26.342 "thread": "nvmf_tgt_poll_group_000", 00:17:26.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:26.342 "listen_address": { 00:17:26.342 "trtype": "TCP", 00:17:26.342 "adrfam": "IPv4", 00:17:26.342 "traddr": "10.0.0.2", 00:17:26.342 "trsvcid": "4420" 00:17:26.342 }, 00:17:26.342 "peer_address": { 00:17:26.342 "trtype": "TCP", 00:17:26.342 "adrfam": "IPv4", 00:17:26.342 "traddr": "10.0.0.1", 00:17:26.342 "trsvcid": "37094" 00:17:26.342 }, 00:17:26.342 "auth": { 00:17:26.342 "state": "completed", 00:17:26.342 "digest": "sha256", 00:17:26.342 "dhgroup": "ffdhe8192" 00:17:26.342 } 00:17:26.342 } 00:17:26.342 ]' 00:17:26.343 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.343 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:26.343 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.601 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.601 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.601 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.601 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.601 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.601 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:26.601 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:27.167 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.424 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.425 07:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.425 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.991 00:17:27.991 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.991 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.991 07:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.250 { 00:17:28.250 "cntlid": 45, 00:17:28.250 "qid": 0, 00:17:28.250 "state": "enabled", 00:17:28.250 "thread": "nvmf_tgt_poll_group_000", 00:17:28.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:28.250 "listen_address": { 00:17:28.250 "trtype": "TCP", 00:17:28.250 "adrfam": "IPv4", 00:17:28.250 "traddr": "10.0.0.2", 00:17:28.250 "trsvcid": "4420" 00:17:28.250 }, 00:17:28.250 "peer_address": { 00:17:28.250 "trtype": "TCP", 00:17:28.250 "adrfam": "IPv4", 00:17:28.250 "traddr": "10.0.0.1", 00:17:28.250 "trsvcid": "37128" 00:17:28.250 }, 00:17:28.250 "auth": { 00:17:28.250 "state": "completed", 00:17:28.250 "digest": "sha256", 00:17:28.250 "dhgroup": "ffdhe8192" 00:17:28.250 } 00:17:28.250 } 00:17:28.250 ]' 00:17:28.250 
07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.250 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.509 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:28.509 07:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:29.077 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.077 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.077 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.077 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.077 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.077 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.077 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.077 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.336 07:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.336 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.903 00:17:29.903 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.903 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.903 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.903 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.903 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.903 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.903 07:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.162 { 00:17:30.162 "cntlid": 47, 00:17:30.162 "qid": 0, 00:17:30.162 "state": "enabled", 00:17:30.162 "thread": "nvmf_tgt_poll_group_000", 00:17:30.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.162 "listen_address": { 00:17:30.162 "trtype": "TCP", 00:17:30.162 "adrfam": "IPv4", 00:17:30.162 "traddr": "10.0.0.2", 00:17:30.162 "trsvcid": "4420" 00:17:30.162 }, 00:17:30.162 "peer_address": { 00:17:30.162 "trtype": "TCP", 00:17:30.162 "adrfam": "IPv4", 00:17:30.162 "traddr": "10.0.0.1", 00:17:30.162 "trsvcid": "37154" 00:17:30.162 }, 00:17:30.162 "auth": { 00:17:30.162 "state": "completed", 00:17:30.162 
"digest": "sha256", 00:17:30.162 "dhgroup": "ffdhe8192" 00:17:30.162 } 00:17:30.162 } 00:17:30.162 ]' 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.162 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.421 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:30.421 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:30.990 07:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:31.250 07:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.250 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.509 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.509 { 00:17:31.509 "cntlid": 49, 00:17:31.509 "qid": 0, 00:17:31.509 "state": "enabled", 00:17:31.509 "thread": "nvmf_tgt_poll_group_000", 00:17:31.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:31.509 "listen_address": { 00:17:31.509 "trtype": "TCP", 00:17:31.509 "adrfam": "IPv4", 
00:17:31.509 "traddr": "10.0.0.2", 00:17:31.509 "trsvcid": "4420" 00:17:31.509 }, 00:17:31.509 "peer_address": { 00:17:31.509 "trtype": "TCP", 00:17:31.509 "adrfam": "IPv4", 00:17:31.509 "traddr": "10.0.0.1", 00:17:31.509 "trsvcid": "37178" 00:17:31.509 }, 00:17:31.509 "auth": { 00:17:31.509 "state": "completed", 00:17:31.509 "digest": "sha384", 00:17:31.509 "dhgroup": "null" 00:17:31.509 } 00:17:31.509 } 00:17:31.509 ]' 00:17:31.509 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.768 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.768 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.768 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:31.768 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.768 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.768 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.768 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.027 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:32.027 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:32.593 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.851 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.851 07:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.108 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.108 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.108 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.108 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.108 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.108 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.108 { 00:17:33.108 "cntlid": 51, 00:17:33.108 "qid": 0, 00:17:33.108 "state": "enabled", 
00:17:33.108 "thread": "nvmf_tgt_poll_group_000", 00:17:33.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:33.108 "listen_address": { 00:17:33.108 "trtype": "TCP", 00:17:33.108 "adrfam": "IPv4", 00:17:33.108 "traddr": "10.0.0.2", 00:17:33.108 "trsvcid": "4420" 00:17:33.108 }, 00:17:33.108 "peer_address": { 00:17:33.108 "trtype": "TCP", 00:17:33.108 "adrfam": "IPv4", 00:17:33.108 "traddr": "10.0.0.1", 00:17:33.108 "trsvcid": "37204" 00:17:33.108 }, 00:17:33.108 "auth": { 00:17:33.108 "state": "completed", 00:17:33.108 "digest": "sha384", 00:17:33.108 "dhgroup": "null" 00:17:33.108 } 00:17:33.108 } 00:17:33.108 ]' 00:17:33.108 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.109 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.109 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.365 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:33.366 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.366 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.366 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.366 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.623 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:33.623 07:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.187 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.445 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.445 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.445 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.445 00:17:34.445 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.445 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.445 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.703 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.703 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.703 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.703 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.703 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.703 07:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.703 { 00:17:34.703 "cntlid": 53, 00:17:34.703 "qid": 0, 00:17:34.703 "state": "enabled", 00:17:34.703 "thread": "nvmf_tgt_poll_group_000", 00:17:34.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:34.703 "listen_address": { 00:17:34.703 "trtype": "TCP", 00:17:34.703 "adrfam": "IPv4", 00:17:34.703 "traddr": "10.0.0.2", 00:17:34.703 "trsvcid": "4420" 00:17:34.703 }, 00:17:34.703 "peer_address": { 00:17:34.703 "trtype": "TCP", 00:17:34.703 "adrfam": "IPv4", 00:17:34.703 "traddr": "10.0.0.1", 00:17:34.703 "trsvcid": "37224" 00:17:34.703 }, 00:17:34.703 "auth": { 00:17:34.703 "state": "completed", 00:17:34.703 "digest": "sha384", 00:17:34.703 "dhgroup": "null" 00:17:34.703 } 00:17:34.703 } 00:17:34.703 ]' 00:17:34.703 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.703 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.703 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.963 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:34.963 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.963 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.963 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.963 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.221 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:35.222 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.790 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.359 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.359 { 00:17:36.359 "cntlid": 55, 00:17:36.359 "qid": 0, 00:17:36.359 "state": "enabled", 00:17:36.359 "thread": "nvmf_tgt_poll_group_000", 00:17:36.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:36.359 "listen_address": { 00:17:36.359 "trtype": "TCP", 00:17:36.359 "adrfam": "IPv4", 00:17:36.359 "traddr": "10.0.0.2", 00:17:36.359 "trsvcid": "4420" 00:17:36.359 }, 00:17:36.359 "peer_address": { 00:17:36.359 "trtype": "TCP", 00:17:36.359 "adrfam": "IPv4", 00:17:36.359 "traddr": "10.0.0.1", 00:17:36.359 "trsvcid": "38188" 00:17:36.359 }, 00:17:36.359 "auth": { 00:17:36.359 "state": "completed", 00:17:36.359 "digest": "sha384", 00:17:36.359 "dhgroup": "null" 00:17:36.359 } 00:17:36.359 } 00:17:36.359 ]' 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.359 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.618 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:36.618 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.618 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.618 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.619 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.877 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:36.877 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.444 07:27:05 
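Alongside the SPDK host stack, every iteration also authenticates with the kernel initiator via nvme-cli, passing the DH-HMAC-CHAP secrets on the command line, and then removes the host entry so the next key can be configured cleanly. A sketch of that tail end of an iteration, with the DHHC-1 secret strings replaced by placeholders (the real values appear verbatim in the trace; --dhchap-ctrl-secret is only passed for keys that have a controller secret):

    # Kernel initiator: connect with explicit DH-HMAC-CHAP secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:00:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Drop the host authorization before the next digest/dhgroup/key combination
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562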
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.444 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.445 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.445 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.445 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.445 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.445 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.703 00:17:37.961 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.961 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.961 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.961 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.961 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.961 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:37.961 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.961 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.961 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.961 { 00:17:37.961 "cntlid": 57, 00:17:37.961 "qid": 0, 00:17:37.961 "state": "enabled", 00:17:37.961 "thread": "nvmf_tgt_poll_group_000", 00:17:37.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:37.961 "listen_address": { 00:17:37.961 "trtype": "TCP", 00:17:37.961 "adrfam": "IPv4", 00:17:37.961 "traddr": "10.0.0.2", 00:17:37.961 "trsvcid": "4420" 00:17:37.961 }, 00:17:37.961 "peer_address": { 00:17:37.961 "trtype": "TCP", 00:17:37.961 "adrfam": "IPv4", 00:17:37.961 "traddr": "10.0.0.1", 00:17:37.961 "trsvcid": "38202" 00:17:37.961 }, 00:17:37.961 "auth": { 00:17:37.961 "state": "completed", 00:17:37.961 "digest": "sha384", 00:17:37.961 "dhgroup": "ffdhe2048" 00:17:37.961 } 00:17:37.961 } 00:17:37.961 ]' 00:17:37.961 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.220 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.220 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.220 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.220 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.220 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.220 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.220 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.478 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:38.478 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:39.043 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.044 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.044 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.044 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.044 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.044 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.044 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.044 07:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.301 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:39.301 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.302 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.559 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.559 { 00:17:39.559 "cntlid": 59, 00:17:39.559 "qid": 0, 00:17:39.559 "state": "enabled", 00:17:39.559 "thread": "nvmf_tgt_poll_group_000", 00:17:39.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.559 "listen_address": { 00:17:39.559 "trtype": "TCP", 00:17:39.559 "adrfam": "IPv4", 00:17:39.559 "traddr": "10.0.0.2", 00:17:39.559 "trsvcid": "4420" 00:17:39.559 }, 00:17:39.559 "peer_address": { 00:17:39.559 "trtype": "TCP", 00:17:39.559 "adrfam": "IPv4", 00:17:39.559 "traddr": "10.0.0.1", 00:17:39.559 "trsvcid": "38236" 00:17:39.559 }, 00:17:39.559 "auth": { 00:17:39.559 "state": "completed", 00:17:39.559 "digest": "sha384", 00:17:39.559 "dhgroup": "ffdhe2048" 00:17:39.559 } 00:17:39.559 } 00:17:39.559 ]' 00:17:39.559 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.818 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.818 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.818 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.818 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.818 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.818 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.818 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.076 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:40.076 07:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.642 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.643 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.643 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.900 00:17:40.900 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.900 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.901 07:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.158 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.158 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.158 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.158 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.158 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.158 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.158 { 00:17:41.158 "cntlid": 61, 00:17:41.158 "qid": 0, 00:17:41.158 "state": "enabled", 00:17:41.158 "thread": "nvmf_tgt_poll_group_000", 00:17:41.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.158 "listen_address": { 00:17:41.158 "trtype": "TCP", 00:17:41.158 "adrfam": "IPv4", 00:17:41.158 "traddr": "10.0.0.2", 00:17:41.158 "trsvcid": "4420" 00:17:41.158 }, 00:17:41.158 "peer_address": { 00:17:41.158 "trtype": "TCP", 00:17:41.158 "adrfam": "IPv4", 00:17:41.158 "traddr": "10.0.0.1", 00:17:41.158 "trsvcid": "38268" 00:17:41.158 }, 00:17:41.158 "auth": { 00:17:41.158 "state": "completed", 00:17:41.158 "digest": "sha384", 00:17:41.159 "dhgroup": "ffdhe2048" 00:17:41.159 } 00:17:41.159 } 00:17:41.159 ]' 00:17:41.159 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.159 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.159 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.415 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.415 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.415 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.415 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.415 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.673 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:41.673 07:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.238 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.495 00:17:42.495 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.495 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.495 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.752 { 00:17:42.752 "cntlid": 63, 00:17:42.752 "qid": 0, 00:17:42.752 "state": "enabled", 00:17:42.752 "thread": "nvmf_tgt_poll_group_000", 00:17:42.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.752 "listen_address": { 00:17:42.752 "trtype": "TCP", 00:17:42.752 "adrfam": "IPv4", 00:17:42.752 "traddr": "10.0.0.2", 00:17:42.752 "trsvcid": "4420" 00:17:42.752 }, 00:17:42.752 "peer_address": { 00:17:42.752 "trtype": "TCP", 00:17:42.752 "adrfam": "IPv4", 00:17:42.752 "traddr": "10.0.0.1", 00:17:42.752 "trsvcid": "38296" 00:17:42.752 }, 00:17:42.752 "auth": { 00:17:42.752 "state": "completed", 00:17:42.752 "digest": "sha384", 00:17:42.752 "dhgroup": "ffdhe2048" 00:17:42.752 } 00:17:42.752 } 00:17:42.752 ]' 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.752 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.010 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.010 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.010 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.010 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.010 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.267 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:43.268 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:43.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.833 07:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.113 
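The iteration traced above repeats the same shape for every (digest, dhgroup, key) combination: the host RPC server is told which DH-HMAC-CHAP digests and DH groups to offer, the target registers the host NQN with the key under test, and the host then attaches a controller so the authentication handshake runs during connect. A condensed sketch of that sequence for the sha384/ffdhe3072/key0 case, using only addresses, NQNs and flags that appear in the trace (hostrpc expands to scripts/rpc.py -s /var/tmp/host.sock as shown above; key0/ckey0 are key names already loaded by the test, not literal secrets):

    # host side: restrict the DH-HMAC-CHAP digest and DH group offered
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: allow this host NQN with key0 and controller key ckey0
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach the controller; the DH-HMAC-CHAP handshake runs here
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0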
00:17:44.113 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.113 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.113 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.371 { 00:17:44.371 "cntlid": 65, 00:17:44.371 "qid": 0, 00:17:44.371 "state": "enabled", 00:17:44.371 "thread": "nvmf_tgt_poll_group_000", 00:17:44.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.371 "listen_address": { 00:17:44.371 "trtype": "TCP", 00:17:44.371 "adrfam": "IPv4", 00:17:44.371 "traddr": "10.0.0.2", 00:17:44.371 "trsvcid": "4420" 00:17:44.371 }, 00:17:44.371 "peer_address": { 00:17:44.371 "trtype": "TCP", 00:17:44.371 "adrfam": "IPv4", 00:17:44.371 "traddr": "10.0.0.1", 00:17:44.371 "trsvcid": "38314" 00:17:44.371 }, 00:17:44.371 "auth": { 00:17:44.371 "state": "completed", 00:17:44.371 "digest": "sha384", 00:17:44.371 "dhgroup": "ffdhe3072" 00:17:44.371 } 00:17:44.371 } 00:17:44.371 ]' 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.371 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.628 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.628 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.628 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.628 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:44.628 07:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:45.196 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.196 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.196 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.196 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.196 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.196 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.196 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.196 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.454 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.714 00:17:45.714 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.714 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.714 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.973 { 00:17:45.973 "cntlid": 67, 00:17:45.973 "qid": 0, 00:17:45.973 "state": "enabled", 00:17:45.973 "thread": "nvmf_tgt_poll_group_000", 00:17:45.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:45.973 "listen_address": { 00:17:45.973 "trtype": "TCP", 00:17:45.973 "adrfam": "IPv4", 00:17:45.973 "traddr": "10.0.0.2", 00:17:45.973 "trsvcid": "4420" 00:17:45.973 }, 00:17:45.973 "peer_address": { 00:17:45.973 "trtype": "TCP", 00:17:45.973 "adrfam": "IPv4", 00:17:45.973 "traddr": "10.0.0.1", 00:17:45.973 "trsvcid": "38334" 00:17:45.973 }, 00:17:45.973 "auth": { 00:17:45.973 "state": "completed", 00:17:45.973 "digest": "sha384", 00:17:45.973 "dhgroup": "ffdhe3072" 00:17:45.973 } 00:17:45.973 } 00:17:45.973 ]' 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.973 07:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.973 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.973 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.232 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.232 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.232 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.232 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret 
DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:46.232 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:46.798 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.056 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.056 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.056 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.056 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.056 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.056 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.056 07:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.056 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.057 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.315 00:17:47.315 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.315 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.315 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.574 { 00:17:47.574 "cntlid": 69, 00:17:47.574 "qid": 0, 00:17:47.574 "state": "enabled", 00:17:47.574 "thread": "nvmf_tgt_poll_group_000", 00:17:47.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:47.574 "listen_address": { 00:17:47.574 "trtype": "TCP", 00:17:47.574 "adrfam": "IPv4", 00:17:47.574 "traddr": "10.0.0.2", 00:17:47.574 "trsvcid": "4420" 00:17:47.574 }, 00:17:47.574 "peer_address": { 00:17:47.574 "trtype": "TCP", 00:17:47.574 "adrfam": "IPv4", 00:17:47.574 "traddr": "10.0.0.1", 00:17:47.574 "trsvcid": "60326" 00:17:47.574 }, 00:17:47.574 "auth": { 00:17:47.574 "state": "completed", 00:17:47.574 "digest": "sha384", 00:17:47.574 "dhgroup": "ffdhe3072" 00:17:47.574 } 00:17:47.574 } 00:17:47.574 ]' 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.574 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.832 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.832 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.832 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:47.832 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:47.832 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:48.398 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.398 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.398 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.398 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.398 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.398 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.398 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.399 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
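After each attach the trace verifies the result from both sides: the host's controller list must contain nvme0, and the target's queue pair for the subsystem must report the expected digest, DH group and a "completed" auth state before the controller is detached again. A minimal sketch of those checks, following the jq filters used in the trace (shown here for the ffdhe3072 iteration):

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'        # expected: nvme0
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"                     # expected: sha384
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"                     # expected: ffdhe3072
    jq -r '.[0].auth.state'   <<< "$qpairs"                     # expected: completed
    hostrpc bdev_nvme_detach_controller nvme0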
00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.656 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.914 00:17:48.914 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.914 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.914 07:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.172 { 00:17:49.172 "cntlid": 71, 00:17:49.172 "qid": 0, 00:17:49.172 "state": "enabled", 00:17:49.172 "thread": "nvmf_tgt_poll_group_000", 00:17:49.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:49.172 "listen_address": { 00:17:49.172 "trtype": "TCP", 00:17:49.172 "adrfam": "IPv4", 00:17:49.172 "traddr": "10.0.0.2", 00:17:49.172 "trsvcid": "4420" 00:17:49.172 }, 00:17:49.172 "peer_address": { 00:17:49.172 "trtype": "TCP", 00:17:49.172 "adrfam": "IPv4", 00:17:49.172 "traddr": "10.0.0.1", 00:17:49.172 "trsvcid": "60358" 00:17:49.172 }, 00:17:49.172 "auth": { 00:17:49.172 "state": "completed", 00:17:49.172 "digest": "sha384", 00:17:49.172 "dhgroup": "ffdhe3072" 00:17:49.172 } 00:17:49.172 } 00:17:49.172 ]' 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.172 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.430 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.430 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.430 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:49.430 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:49.994 07:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.994 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.994 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.994 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.994 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.994 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.994 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.994 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:49.994 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
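In between, each key is also exercised through the kernel host: nvme-cli connects with the literal DHHC-1 secrets, the connection is torn down, and the host entry is removed from the subsystem before the next dhgroup/key pair. A sketch of that round trip with the secrets from the trace replaced by placeholders (--dhchap-ctrl-secret is simply omitted for keys without a controller secret, as with key3 above):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
        --dhchap-secret '<host DHHC-1 secret>' \
        --dhchap-ctrl-secret '<controller DHHC-1 secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562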
00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.250 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.508 00:17:50.508 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.508 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.508 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.765 { 00:17:50.765 "cntlid": 73, 00:17:50.765 "qid": 0, 00:17:50.765 "state": "enabled", 00:17:50.765 "thread": "nvmf_tgt_poll_group_000", 00:17:50.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:50.765 "listen_address": { 00:17:50.765 "trtype": "TCP", 00:17:50.765 "adrfam": "IPv4", 00:17:50.765 "traddr": "10.0.0.2", 00:17:50.765 "trsvcid": "4420" 00:17:50.765 }, 00:17:50.765 "peer_address": { 00:17:50.765 "trtype": "TCP", 00:17:50.765 "adrfam": "IPv4", 00:17:50.765 "traddr": "10.0.0.1", 00:17:50.765 "trsvcid": "60392" 00:17:50.765 }, 00:17:50.765 "auth": { 00:17:50.765 "state": "completed", 00:17:50.765 "digest": "sha384", 00:17:50.765 "dhgroup": "ffdhe4096" 00:17:50.765 } 00:17:50.765 } 00:17:50.765 ]' 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.765 
07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.765 07:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.023 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:51.023 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:51.599 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.599 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.599 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.599 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.599 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.599 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.599 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:51.599 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.857 07:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.114 00:17:52.114 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.114 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.114 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.371 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.372 { 00:17:52.372 "cntlid": 75, 00:17:52.372 "qid": 0, 00:17:52.372 "state": "enabled", 00:17:52.372 "thread": "nvmf_tgt_poll_group_000", 00:17:52.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.372 "listen_address": { 00:17:52.372 "trtype": "TCP", 00:17:52.372 "adrfam": "IPv4", 00:17:52.372 "traddr": "10.0.0.2", 00:17:52.372 "trsvcid": "4420" 00:17:52.372 }, 00:17:52.372 "peer_address": { 00:17:52.372 "trtype": "TCP", 00:17:52.372 "adrfam": "IPv4", 00:17:52.372 "traddr": "10.0.0.1", 00:17:52.372 "trsvcid": "60402" 00:17:52.372 }, 00:17:52.372 "auth": { 00:17:52.372 "state": "completed", 00:17:52.372 "digest": "sha384", 00:17:52.372 "dhgroup": "ffdhe4096" 00:17:52.372 } 00:17:52.372 } 00:17:52.372 ]' 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.372 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.629 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:52.629 07:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:53.195 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.195 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.195 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.195 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.195 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.195 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.195 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.195 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.453 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.711 00:17:53.711 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.711 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.711 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.969 { 00:17:53.969 "cntlid": 77, 00:17:53.969 "qid": 0, 00:17:53.969 "state": "enabled", 00:17:53.969 "thread": "nvmf_tgt_poll_group_000", 00:17:53.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:53.969 "listen_address": { 00:17:53.969 "trtype": "TCP", 00:17:53.969 "adrfam": "IPv4", 00:17:53.969 "traddr": "10.0.0.2", 00:17:53.969 "trsvcid": "4420" 00:17:53.969 }, 00:17:53.969 "peer_address": { 00:17:53.969 "trtype": "TCP", 00:17:53.969 "adrfam": "IPv4", 00:17:53.969 "traddr": "10.0.0.1", 00:17:53.969 "trsvcid": "60420" 00:17:53.969 }, 00:17:53.969 "auth": { 00:17:53.969 "state": "completed", 00:17:53.969 "digest": "sha384", 00:17:53.969 "dhgroup": "ffdhe4096" 00:17:53.969 } 00:17:53.969 } 00:17:53.969 ]' 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.969 07:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.969 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.969 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.969 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.969 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.226 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:54.226 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:17:54.793 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.793 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.793 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.793 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.793 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.793 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.793 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.793 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.051 07:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.051 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.051 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.051 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.051 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.309 00:17:55.309 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.309 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.309 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.568 { 00:17:55.568 "cntlid": 79, 00:17:55.568 "qid": 0, 00:17:55.568 "state": "enabled", 00:17:55.568 "thread": "nvmf_tgt_poll_group_000", 00:17:55.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:55.568 "listen_address": { 00:17:55.568 "trtype": "TCP", 00:17:55.568 "adrfam": "IPv4", 00:17:55.568 "traddr": "10.0.0.2", 00:17:55.568 "trsvcid": "4420" 00:17:55.568 }, 00:17:55.568 "peer_address": { 00:17:55.568 "trtype": "TCP", 00:17:55.568 "adrfam": "IPv4", 00:17:55.568 "traddr": "10.0.0.1", 00:17:55.568 "trsvcid": "60452" 00:17:55.568 }, 00:17:55.568 "auth": { 00:17:55.568 "state": "completed", 00:17:55.568 "digest": "sha384", 00:17:55.568 "dhgroup": "ffdhe4096" 00:17:55.568 } 00:17:55.568 } 00:17:55.568 ]' 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.568 07:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.568 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.827 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:55.827 07:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.395 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:56.654 07:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.654 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.913 00:17:56.913 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.913 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.913 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.171 { 00:17:57.171 "cntlid": 81, 00:17:57.171 "qid": 0, 00:17:57.171 "state": "enabled", 00:17:57.171 "thread": "nvmf_tgt_poll_group_000", 00:17:57.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:57.171 "listen_address": { 00:17:57.171 "trtype": "TCP", 00:17:57.171 "adrfam": "IPv4", 00:17:57.171 "traddr": "10.0.0.2", 00:17:57.171 "trsvcid": "4420" 00:17:57.171 }, 00:17:57.171 "peer_address": { 00:17:57.171 "trtype": "TCP", 00:17:57.171 "adrfam": "IPv4", 00:17:57.171 "traddr": "10.0.0.1", 00:17:57.171 "trsvcid": "34330" 00:17:57.171 }, 00:17:57.171 "auth": { 00:17:57.171 "state": "completed", 00:17:57.171 "digest": 
"sha384", 00:17:57.171 "dhgroup": "ffdhe6144" 00:17:57.171 } 00:17:57.171 } 00:17:57.171 ]' 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.171 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.172 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.172 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.172 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.430 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:57.430 07:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:17:57.997 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.997 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.997 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.997 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.997 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.997 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.997 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:57.997 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.255 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.513 00:17:58.513 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.513 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.513 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.771 { 00:17:58.771 "cntlid": 83, 00:17:58.771 "qid": 0, 00:17:58.771 "state": "enabled", 00:17:58.771 "thread": "nvmf_tgt_poll_group_000", 00:17:58.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:58.771 "listen_address": { 00:17:58.771 "trtype": "TCP", 00:17:58.771 "adrfam": "IPv4", 00:17:58.771 "traddr": "10.0.0.2", 00:17:58.771 
"trsvcid": "4420" 00:17:58.771 }, 00:17:58.771 "peer_address": { 00:17:58.771 "trtype": "TCP", 00:17:58.771 "adrfam": "IPv4", 00:17:58.771 "traddr": "10.0.0.1", 00:17:58.771 "trsvcid": "34340" 00:17:58.771 }, 00:17:58.771 "auth": { 00:17:58.771 "state": "completed", 00:17:58.771 "digest": "sha384", 00:17:58.771 "dhgroup": "ffdhe6144" 00:17:58.771 } 00:17:58.771 } 00:17:58.771 ]' 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.771 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.029 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.029 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.029 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.029 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.029 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.287 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:59.287 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:59.854 
07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.854 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.421 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.421 { 00:18:00.421 "cntlid": 85, 00:18:00.421 "qid": 0, 00:18:00.421 "state": "enabled", 00:18:00.421 "thread": "nvmf_tgt_poll_group_000", 00:18:00.421 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:00.421 "listen_address": { 00:18:00.421 "trtype": "TCP", 00:18:00.421 "adrfam": "IPv4", 00:18:00.421 "traddr": "10.0.0.2", 00:18:00.421 "trsvcid": "4420" 00:18:00.421 }, 00:18:00.421 "peer_address": { 00:18:00.421 "trtype": "TCP", 00:18:00.421 "adrfam": "IPv4", 00:18:00.421 "traddr": "10.0.0.1", 00:18:00.421 "trsvcid": "34378" 00:18:00.421 }, 00:18:00.421 "auth": { 00:18:00.421 "state": "completed", 00:18:00.421 "digest": "sha384", 00:18:00.421 "dhgroup": "ffdhe6144" 00:18:00.421 } 00:18:00.421 } 00:18:00.421 ]' 00:18:00.421 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.422 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.422 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.680 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.680 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.680 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.680 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.680 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.939 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:00.939 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.520 07:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.520 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.088 00:18:02.088 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.088 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.088 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.088 { 00:18:02.088 "cntlid": 87, 
00:18:02.088 "qid": 0, 00:18:02.088 "state": "enabled", 00:18:02.088 "thread": "nvmf_tgt_poll_group_000", 00:18:02.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:02.088 "listen_address": { 00:18:02.088 "trtype": "TCP", 00:18:02.088 "adrfam": "IPv4", 00:18:02.088 "traddr": "10.0.0.2", 00:18:02.088 "trsvcid": "4420" 00:18:02.088 }, 00:18:02.088 "peer_address": { 00:18:02.088 "trtype": "TCP", 00:18:02.088 "adrfam": "IPv4", 00:18:02.088 "traddr": "10.0.0.1", 00:18:02.088 "trsvcid": "34408" 00:18:02.088 }, 00:18:02.088 "auth": { 00:18:02.088 "state": "completed", 00:18:02.088 "digest": "sha384", 00:18:02.088 "dhgroup": "ffdhe6144" 00:18:02.088 } 00:18:02.088 } 00:18:02.088 ]' 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.088 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.347 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.347 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.347 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.347 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.347 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.605 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:02.605 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.172 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.740 00:18:03.740 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.740 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.740 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.998 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.998 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.998 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.998 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.998 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.998 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.998 { 00:18:03.998 "cntlid": 89, 00:18:03.998 "qid": 0, 00:18:03.998 "state": "enabled", 00:18:03.998 "thread": "nvmf_tgt_poll_group_000", 00:18:03.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:03.998 "listen_address": { 00:18:03.998 "trtype": "TCP", 00:18:03.998 "adrfam": "IPv4", 00:18:03.998 "traddr": "10.0.0.2", 00:18:03.998 "trsvcid": "4420" 00:18:03.998 }, 00:18:03.998 "peer_address": { 00:18:03.998 "trtype": "TCP", 00:18:03.998 "adrfam": "IPv4", 00:18:03.998 "traddr": "10.0.0.1", 00:18:03.998 "trsvcid": "34446" 00:18:03.998 }, 00:18:03.998 "auth": { 00:18:03.998 "state": "completed", 00:18:03.998 "digest": "sha384", 00:18:03.998 "dhgroup": "ffdhe8192" 00:18:03.998 } 00:18:03.998 } 00:18:03.998 ]' 00:18:03.998 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.998 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.998 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.998 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.998 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.257 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.257 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.257 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.257 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:04.257 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:04.823 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.823 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.823 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.823 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.823 07:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.823 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.823 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:04.823 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.081 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.647 00:18:05.647 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.647 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.647 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.905 { 00:18:05.905 "cntlid": 91, 00:18:05.905 "qid": 0, 00:18:05.905 "state": "enabled", 00:18:05.905 "thread": "nvmf_tgt_poll_group_000", 00:18:05.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:05.905 "listen_address": { 00:18:05.905 "trtype": "TCP", 00:18:05.905 "adrfam": "IPv4", 00:18:05.905 "traddr": "10.0.0.2", 00:18:05.905 "trsvcid": "4420" 00:18:05.905 }, 00:18:05.905 "peer_address": { 00:18:05.905 "trtype": "TCP", 00:18:05.905 "adrfam": "IPv4", 00:18:05.905 "traddr": "10.0.0.1", 00:18:05.905 "trsvcid": "34468" 00:18:05.905 }, 00:18:05.905 "auth": { 00:18:05.905 "state": "completed", 00:18:05.905 "digest": "sha384", 00:18:05.905 "dhgroup": "ffdhe8192" 00:18:05.905 } 00:18:05.905 } 00:18:05.905 ]' 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.905 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.906 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.163 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:06.163 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:06.729 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.729 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.729 07:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.729 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.729 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.729 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.729 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:06.729 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.988 07:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.554 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.554 07:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.554 { 00:18:07.554 "cntlid": 93, 00:18:07.554 "qid": 0, 00:18:07.554 "state": "enabled", 00:18:07.554 "thread": "nvmf_tgt_poll_group_000", 00:18:07.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:07.554 "listen_address": { 00:18:07.554 "trtype": "TCP", 00:18:07.554 "adrfam": "IPv4", 00:18:07.554 "traddr": "10.0.0.2", 00:18:07.554 "trsvcid": "4420" 00:18:07.554 }, 00:18:07.554 "peer_address": { 00:18:07.554 "trtype": "TCP", 00:18:07.554 "adrfam": "IPv4", 00:18:07.554 "traddr": "10.0.0.1", 00:18:07.554 "trsvcid": "56026" 00:18:07.554 }, 00:18:07.554 "auth": { 00:18:07.554 "state": "completed", 00:18:07.554 "digest": "sha384", 00:18:07.554 "dhgroup": "ffdhe8192" 00:18:07.554 } 00:18:07.554 } 00:18:07.554 ]' 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.554 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.813 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.813 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.813 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.813 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.813 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.813 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:07.813 07:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:08.380 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.380 07:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.380 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.380 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.380 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.380 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.380 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.380 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.639 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.205 00:18:09.205 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.205 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.205 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.462 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.463 { 00:18:09.463 "cntlid": 95, 00:18:09.463 "qid": 0, 00:18:09.463 "state": "enabled", 00:18:09.463 "thread": "nvmf_tgt_poll_group_000", 00:18:09.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:09.463 "listen_address": { 00:18:09.463 "trtype": "TCP", 00:18:09.463 "adrfam": "IPv4", 00:18:09.463 "traddr": "10.0.0.2", 00:18:09.463 "trsvcid": "4420" 00:18:09.463 }, 00:18:09.463 "peer_address": { 00:18:09.463 "trtype": "TCP", 00:18:09.463 "adrfam": "IPv4", 00:18:09.463 "traddr": "10.0.0.1", 00:18:09.463 "trsvcid": "56056" 00:18:09.463 }, 00:18:09.463 "auth": { 00:18:09.463 "state": "completed", 00:18:09.463 "digest": "sha384", 00:18:09.463 "dhgroup": "ffdhe8192" 00:18:09.463 } 00:18:09.463 } 00:18:09.463 ]' 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.463 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.722 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:09.722 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.289 07:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.289 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:10.547 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:10.547 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.548 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.806 00:18:10.806 
07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.806 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.806 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.064 { 00:18:11.064 "cntlid": 97, 00:18:11.064 "qid": 0, 00:18:11.064 "state": "enabled", 00:18:11.064 "thread": "nvmf_tgt_poll_group_000", 00:18:11.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:11.064 "listen_address": { 00:18:11.064 "trtype": "TCP", 00:18:11.064 "adrfam": "IPv4", 00:18:11.064 "traddr": "10.0.0.2", 00:18:11.064 "trsvcid": "4420" 00:18:11.064 }, 00:18:11.064 "peer_address": { 00:18:11.064 "trtype": "TCP", 00:18:11.064 "adrfam": "IPv4", 00:18:11.064 "traddr": "10.0.0.1", 00:18:11.064 "trsvcid": "56072" 00:18:11.064 }, 00:18:11.064 "auth": { 00:18:11.064 "state": "completed", 00:18:11.064 "digest": "sha512", 00:18:11.064 "dhgroup": "null" 00:18:11.064 } 00:18:11.064 } 00:18:11.064 ]' 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.064 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.064 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:11.064 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.064 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.064 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.064 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.321 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:11.321 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:11.888 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.888 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.888 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.888 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.888 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.889 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.889 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:11.889 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:12.150 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.151 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.412 00:18:12.412 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.412 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.412 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.412 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.412 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.412 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.412 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.671 { 00:18:12.671 "cntlid": 99, 00:18:12.671 "qid": 0, 00:18:12.671 "state": "enabled", 00:18:12.671 "thread": "nvmf_tgt_poll_group_000", 00:18:12.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:12.671 "listen_address": { 00:18:12.671 "trtype": "TCP", 00:18:12.671 "adrfam": "IPv4", 00:18:12.671 "traddr": "10.0.0.2", 00:18:12.671 "trsvcid": "4420" 00:18:12.671 }, 00:18:12.671 "peer_address": { 00:18:12.671 "trtype": "TCP", 00:18:12.671 "adrfam": "IPv4", 00:18:12.671 "traddr": "10.0.0.1", 00:18:12.671 "trsvcid": "56088" 00:18:12.671 }, 00:18:12.671 "auth": { 00:18:12.671 "state": "completed", 00:18:12.671 "digest": "sha512", 00:18:12.671 "dhgroup": "null" 00:18:12.671 } 00:18:12.671 } 00:18:12.671 ]' 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.671 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.933 07:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:12.933 07:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:13.499 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.499 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.499 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.499 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.499 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.499 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.499 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:13.499 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
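A condensed sketch (not captured output) of the host-side setup that each pass of the loop above repeats, using the same rpc.py invocations, NQNs and addresses that appear in the trace; the key names (key2/ckey2 in this pass) come from the keys[]/ckeys[] arrays in target/auth.sh, and the target-side call is written without a socket because the trace's rpc_cmd wrapper does not print one:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

  # 1. Limit the host to the digest/dhgroup combination under test
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

  # 2. Register the host NQN on the target subsystem with the key pair under test
  #    (--dhchap-ctrlr-key is only added when ckeys[keyid] is set, per the ${ckeys[$3]:+...} expansion in the trace)
  #    (target RPC socket not shown in the trace; rpc.py's default is assumed here)
  $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 3. Attach a host-side controller that must complete DH-HMAC-CHAP with the same keys
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2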
00:18:13.757 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.016 00:18:14.016 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.016 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.016 07:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.016 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.016 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.016 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.016 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.016 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.016 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.016 { 00:18:14.016 "cntlid": 101, 00:18:14.016 "qid": 0, 00:18:14.016 "state": "enabled", 00:18:14.016 "thread": "nvmf_tgt_poll_group_000", 00:18:14.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:14.016 "listen_address": { 00:18:14.016 "trtype": "TCP", 00:18:14.016 "adrfam": "IPv4", 00:18:14.016 "traddr": "10.0.0.2", 00:18:14.016 "trsvcid": "4420" 00:18:14.016 }, 00:18:14.016 "peer_address": { 00:18:14.016 "trtype": "TCP", 00:18:14.016 "adrfam": "IPv4", 00:18:14.016 "traddr": "10.0.0.1", 00:18:14.016 "trsvcid": "56120" 00:18:14.016 }, 00:18:14.016 "auth": { 00:18:14.016 "state": "completed", 00:18:14.016 "digest": "sha512", 00:18:14.016 "dhgroup": "null" 00:18:14.016 } 00:18:14.016 } 00:18:14.016 ]' 00:18:14.016 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.274 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.274 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.274 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:14.274 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.274 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.274 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.274 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.533 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:14.533 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:15.100 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.100 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.100 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.100 07:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.100 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.359 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.359 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.618 { 00:18:15.618 "cntlid": 103, 00:18:15.618 "qid": 0, 00:18:15.618 "state": "enabled", 00:18:15.618 "thread": "nvmf_tgt_poll_group_000", 00:18:15.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:15.618 "listen_address": { 00:18:15.618 "trtype": "TCP", 00:18:15.618 "adrfam": "IPv4", 00:18:15.618 "traddr": "10.0.0.2", 00:18:15.618 "trsvcid": "4420" 00:18:15.618 }, 00:18:15.618 "peer_address": { 00:18:15.618 "trtype": "TCP", 00:18:15.618 "adrfam": "IPv4", 00:18:15.618 "traddr": "10.0.0.1", 00:18:15.618 "trsvcid": "56144" 00:18:15.618 }, 00:18:15.618 "auth": { 00:18:15.618 "state": "completed", 00:18:15.618 "digest": "sha512", 00:18:15.618 "dhgroup": "null" 00:18:15.618 } 00:18:15.618 } 00:18:15.618 ]' 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:15.618 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.876 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.876 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.876 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.876 07:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:15.876 07:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.444 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
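Each combination ends the same way in the trace above: a kernel nvme-cli connect that must authenticate with the raw DHHC-1 secret(s), a disconnect, and removal of the host from the subsystem. As a sketch (secrets abbreviated; the full DHHC-1 strings are printed in the trace, and --dhchap-ctrl-secret appears only for keys that also carry a controller secret):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
      --dhchap-secret "DHHC-1:03:<host secret as printed in the trace>"

  # Tear down and deregister before the next digest/dhgroup/key combination
  # (target RPC socket not shown in the trace; rpc.py's default is assumed here)
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"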
00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.705 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.963 00:18:16.963 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.963 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.963 07:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.222 { 00:18:17.222 "cntlid": 105, 00:18:17.222 "qid": 0, 00:18:17.222 "state": "enabled", 00:18:17.222 "thread": "nvmf_tgt_poll_group_000", 00:18:17.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:17.222 "listen_address": { 00:18:17.222 "trtype": "TCP", 00:18:17.222 "adrfam": "IPv4", 00:18:17.222 "traddr": "10.0.0.2", 00:18:17.222 "trsvcid": "4420" 00:18:17.222 }, 00:18:17.222 "peer_address": { 00:18:17.222 "trtype": "TCP", 00:18:17.222 "adrfam": "IPv4", 00:18:17.222 "traddr": "10.0.0.1", 00:18:17.222 "trsvcid": "36826" 00:18:17.222 }, 00:18:17.222 "auth": { 00:18:17.222 "state": "completed", 00:18:17.222 "digest": "sha512", 00:18:17.222 "dhgroup": "ffdhe2048" 00:18:17.222 } 00:18:17.222 } 00:18:17.222 ]' 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.222 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.481 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.481 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.482 07:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.482 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:17.482 07:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:18.049 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.049 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.049 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.049 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.049 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.049 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.049 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:18.049 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.308 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.567 00:18:18.567 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.567 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.567 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.825 { 00:18:18.825 "cntlid": 107, 00:18:18.825 "qid": 0, 00:18:18.825 "state": "enabled", 00:18:18.825 "thread": "nvmf_tgt_poll_group_000", 00:18:18.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:18.825 "listen_address": { 00:18:18.825 "trtype": "TCP", 00:18:18.825 "adrfam": "IPv4", 00:18:18.825 "traddr": "10.0.0.2", 00:18:18.825 "trsvcid": "4420" 00:18:18.825 }, 00:18:18.825 "peer_address": { 00:18:18.825 "trtype": "TCP", 00:18:18.825 "adrfam": "IPv4", 00:18:18.825 "traddr": "10.0.0.1", 00:18:18.825 "trsvcid": "36856" 00:18:18.825 }, 00:18:18.825 "auth": { 00:18:18.825 "state": "completed", 00:18:18.825 "digest": "sha512", 00:18:18.825 "dhgroup": "ffdhe2048" 00:18:18.825 } 00:18:18.825 } 00:18:18.825 ]' 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.825 07:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.084 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:19.084 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:19.651 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.651 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.651 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.651 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.651 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.651 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.651 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:19.651 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
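The checks repeated above verify, for every attach, that authentication actually completed with the expected parameters before the controller is detached. A sketch of that verification for the sha512/ffdhe2048 pass, with the same jq filters used in the trace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Host side: the bdev controller created by bdev_nvme_attach_controller must exist
  [[ "$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

  # Target side: the first reported qpair (qid 0) must show the negotiated digest,
  # dhgroup and a completed auth state
  # (target RPC socket not shown in the trace; rpc.py's default is assumed here)
  qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512 ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

  # Detach the host-side controller before the nvme-cli connect that follows
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0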
00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.910 07:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.169 00:18:20.169 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.169 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.169 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.427 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.427 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.427 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.427 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.427 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.427 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.427 { 00:18:20.427 "cntlid": 109, 00:18:20.427 "qid": 0, 00:18:20.427 "state": "enabled", 00:18:20.427 "thread": "nvmf_tgt_poll_group_000", 00:18:20.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:20.427 "listen_address": { 00:18:20.427 "trtype": "TCP", 00:18:20.427 "adrfam": "IPv4", 00:18:20.427 "traddr": "10.0.0.2", 00:18:20.427 "trsvcid": "4420" 00:18:20.427 }, 00:18:20.427 "peer_address": { 00:18:20.427 "trtype": "TCP", 00:18:20.427 "adrfam": "IPv4", 00:18:20.427 "traddr": "10.0.0.1", 00:18:20.427 "trsvcid": "36878" 00:18:20.427 }, 00:18:20.427 "auth": { 00:18:20.427 "state": "completed", 00:18:20.427 "digest": "sha512", 00:18:20.427 "dhgroup": "ffdhe2048" 00:18:20.427 } 00:18:20.427 } 00:18:20.427 ]' 00:18:20.428 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.428 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.428 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.428 07:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.428 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.428 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.428 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.428 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.686 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:20.686 07:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:21.267 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.267 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.267 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.267 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.267 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.267 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.267 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.267 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.529 07:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.529 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.787 00:18:21.787 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.787 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.787 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.046 { 00:18:22.046 "cntlid": 111, 00:18:22.046 "qid": 0, 00:18:22.046 "state": "enabled", 00:18:22.046 "thread": "nvmf_tgt_poll_group_000", 00:18:22.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:22.046 "listen_address": { 00:18:22.046 "trtype": "TCP", 00:18:22.046 "adrfam": "IPv4", 00:18:22.046 "traddr": "10.0.0.2", 00:18:22.046 "trsvcid": "4420" 00:18:22.046 }, 00:18:22.046 "peer_address": { 00:18:22.046 "trtype": "TCP", 00:18:22.046 "adrfam": "IPv4", 00:18:22.046 "traddr": "10.0.0.1", 00:18:22.046 "trsvcid": "36900" 00:18:22.046 }, 00:18:22.046 "auth": { 00:18:22.046 "state": "completed", 00:18:22.046 "digest": "sha512", 00:18:22.046 "dhgroup": "ffdhe2048" 00:18:22.046 } 00:18:22.046 } 00:18:22.046 ]' 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.046 
07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.046 07:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.046 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.046 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.046 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.305 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:22.305 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:22.872 07:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.130 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.388 00:18:23.388 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.388 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.388 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.647 { 00:18:23.647 "cntlid": 113, 00:18:23.647 "qid": 0, 00:18:23.647 "state": "enabled", 00:18:23.647 "thread": "nvmf_tgt_poll_group_000", 00:18:23.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.647 "listen_address": { 00:18:23.647 "trtype": "TCP", 00:18:23.647 "adrfam": "IPv4", 00:18:23.647 "traddr": "10.0.0.2", 00:18:23.647 "trsvcid": "4420" 00:18:23.647 }, 00:18:23.647 "peer_address": { 00:18:23.647 "trtype": "TCP", 00:18:23.647 "adrfam": "IPv4", 00:18:23.647 "traddr": "10.0.0.1", 00:18:23.647 "trsvcid": "36930" 00:18:23.647 }, 00:18:23.647 "auth": { 00:18:23.647 "state": "completed", 00:18:23.647 "digest": "sha512", 00:18:23.647 "dhgroup": "ffdhe3072" 00:18:23.647 } 00:18:23.647 } 00:18:23.647 ]' 00:18:23.647 07:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.647 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.906 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:23.906 07:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:24.505 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.505 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.505 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.505 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.505 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.505 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.505 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:24.505 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.762 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.019 00:18:25.019 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.019 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.019 07:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.019 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.020 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.020 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.020 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.020 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.020 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.020 { 00:18:25.020 "cntlid": 115, 00:18:25.020 "qid": 0, 00:18:25.020 "state": "enabled", 00:18:25.020 "thread": "nvmf_tgt_poll_group_000", 00:18:25.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:25.020 "listen_address": { 00:18:25.020 "trtype": "TCP", 00:18:25.020 "adrfam": "IPv4", 00:18:25.020 "traddr": "10.0.0.2", 00:18:25.020 "trsvcid": "4420" 00:18:25.020 }, 00:18:25.020 "peer_address": { 00:18:25.020 "trtype": "TCP", 00:18:25.020 "adrfam": "IPv4", 
00:18:25.020 "traddr": "10.0.0.1", 00:18:25.020 "trsvcid": "36960" 00:18:25.020 }, 00:18:25.020 "auth": { 00:18:25.020 "state": "completed", 00:18:25.020 "digest": "sha512", 00:18:25.020 "dhgroup": "ffdhe3072" 00:18:25.020 } 00:18:25.020 } 00:18:25.020 ]' 00:18:25.020 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.277 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.277 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.277 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.277 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.277 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.277 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.277 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.534 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:25.534 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:26.096 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.096 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.096 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.096 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.096 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.096 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.096 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.096 07:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.096 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.353 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.353 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.353 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.353 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.609 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.610 { 00:18:26.610 "cntlid": 117, 00:18:26.610 "qid": 0, 00:18:26.610 "state": "enabled", 00:18:26.610 "thread": "nvmf_tgt_poll_group_000", 00:18:26.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:26.610 "listen_address": { 00:18:26.610 "trtype": "TCP", 
00:18:26.610 "adrfam": "IPv4", 00:18:26.610 "traddr": "10.0.0.2", 00:18:26.610 "trsvcid": "4420" 00:18:26.610 }, 00:18:26.610 "peer_address": { 00:18:26.610 "trtype": "TCP", 00:18:26.610 "adrfam": "IPv4", 00:18:26.610 "traddr": "10.0.0.1", 00:18:26.610 "trsvcid": "35472" 00:18:26.610 }, 00:18:26.610 "auth": { 00:18:26.610 "state": "completed", 00:18:26.610 "digest": "sha512", 00:18:26.610 "dhgroup": "ffdhe3072" 00:18:26.610 } 00:18:26.610 } 00:18:26.610 ]' 00:18:26.610 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.867 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.867 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.867 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.867 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.867 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.867 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.867 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.126 07:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:27.126 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:27.692 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.692 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.692 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.692 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.692 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.692 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.692 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:27.692 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.951 07:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.210 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.210 { 00:18:28.210 "cntlid": 119, 00:18:28.210 "qid": 0, 00:18:28.210 "state": "enabled", 00:18:28.210 "thread": "nvmf_tgt_poll_group_000", 00:18:28.210 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:28.210 "listen_address": { 00:18:28.210 "trtype": "TCP", 00:18:28.210 "adrfam": "IPv4", 00:18:28.210 "traddr": "10.0.0.2", 00:18:28.210 "trsvcid": "4420" 00:18:28.210 }, 00:18:28.210 "peer_address": { 00:18:28.210 "trtype": "TCP", 00:18:28.210 "adrfam": "IPv4", 00:18:28.210 "traddr": "10.0.0.1", 00:18:28.210 "trsvcid": "35502" 00:18:28.210 }, 00:18:28.210 "auth": { 00:18:28.210 "state": "completed", 00:18:28.210 "digest": "sha512", 00:18:28.210 "dhgroup": "ffdhe3072" 00:18:28.210 } 00:18:28.210 } 00:18:28.210 ]' 00:18:28.210 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.469 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.469 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.469 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.469 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.469 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.469 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.469 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.727 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:28.727 07:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.294 07:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.294 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.553 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.553 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.553 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.553 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.812 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.812 07:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.812 { 00:18:29.812 "cntlid": 121, 00:18:29.812 "qid": 0, 00:18:29.812 "state": "enabled", 00:18:29.812 "thread": "nvmf_tgt_poll_group_000", 00:18:29.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:29.812 "listen_address": { 00:18:29.812 "trtype": "TCP", 00:18:29.812 "adrfam": "IPv4", 00:18:29.812 "traddr": "10.0.0.2", 00:18:29.812 "trsvcid": "4420" 00:18:29.812 }, 00:18:29.812 "peer_address": { 00:18:29.812 "trtype": "TCP", 00:18:29.812 "adrfam": "IPv4", 00:18:29.812 "traddr": "10.0.0.1", 00:18:29.812 "trsvcid": "35518" 00:18:29.812 }, 00:18:29.812 "auth": { 00:18:29.812 "state": "completed", 00:18:29.812 "digest": "sha512", 00:18:29.812 "dhgroup": "ffdhe4096" 00:18:29.812 } 00:18:29.812 } 00:18:29.812 ]' 00:18:29.812 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.070 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.070 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.070 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.070 07:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.070 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.070 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.070 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.328 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:30.328 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:30.894 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.894 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:30.894 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
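Between every add_host/remove_host pair the trace also exercises the kernel initiator: nvme-cli connects to the subsystem with the DHHC-1 secrets matching the key configured on the target, disconnects, and the host entry is removed so the next dhgroup/key combination starts from a clean subsystem. A minimal sketch of that leg, with the flags taken from the connect lines above ($key and $ckey stand in for the DHHC-1:xx:...: strings and are illustrative placeholders; --dhchap-ctrl-secret is only passed when a controller key is configured, and is absent in the key3 iterations):
hostid=80aaeb9f-0274-ea11-906e-0017a4403562
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q "$hostnqn" --hostid "$hostid" -l 0 \
     --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# drop the host from the subsystem before the next iteration (rpc_cmd addresses the target's RPC socket)
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"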
00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.895 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.153 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.153 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.153 07:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.412 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.412 { 00:18:31.412 "cntlid": 123, 00:18:31.412 "qid": 0, 00:18:31.412 "state": "enabled", 00:18:31.412 "thread": "nvmf_tgt_poll_group_000", 00:18:31.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:31.412 "listen_address": { 00:18:31.412 "trtype": "TCP", 00:18:31.412 "adrfam": "IPv4", 00:18:31.412 "traddr": "10.0.0.2", 00:18:31.412 "trsvcid": "4420" 00:18:31.412 }, 00:18:31.412 "peer_address": { 00:18:31.412 "trtype": "TCP", 00:18:31.412 "adrfam": "IPv4", 00:18:31.412 "traddr": "10.0.0.1", 00:18:31.412 "trsvcid": "35532" 00:18:31.412 }, 00:18:31.412 "auth": { 00:18:31.412 "state": "completed", 00:18:31.412 "digest": "sha512", 00:18:31.412 "dhgroup": "ffdhe4096" 00:18:31.412 } 00:18:31.412 } 00:18:31.412 ]' 00:18:31.412 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.670 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.670 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.670 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.670 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.670 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.670 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.670 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.930 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:31.930 07:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:32.497 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.498 07:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.498 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.757 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.017 00:18:33.017 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.017 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.017 07:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.017 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.017 07:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.017 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.017 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.017 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.017 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.017 { 00:18:33.017 "cntlid": 125, 00:18:33.017 "qid": 0, 00:18:33.017 "state": "enabled", 00:18:33.017 "thread": "nvmf_tgt_poll_group_000", 00:18:33.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:33.017 "listen_address": { 00:18:33.017 "trtype": "TCP", 00:18:33.017 "adrfam": "IPv4", 00:18:33.017 "traddr": "10.0.0.2", 00:18:33.017 "trsvcid": "4420" 00:18:33.017 }, 00:18:33.017 "peer_address": { 00:18:33.017 "trtype": "TCP", 00:18:33.017 "adrfam": "IPv4", 00:18:33.017 "traddr": "10.0.0.1", 00:18:33.017 "trsvcid": "35546" 00:18:33.017 }, 00:18:33.017 "auth": { 00:18:33.017 "state": "completed", 00:18:33.017 "digest": "sha512", 00:18:33.017 "dhgroup": "ffdhe4096" 00:18:33.017 } 00:18:33.017 } 00:18:33.017 ]' 00:18:33.017 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.275 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.275 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.275 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.275 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.275 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.275 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.275 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.533 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:33.533 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:34.099 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.099 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:34.099 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.099 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.099 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.099 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.099 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.099 07:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.358 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.617 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.617 07:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.617 { 00:18:34.617 "cntlid": 127, 00:18:34.617 "qid": 0, 00:18:34.617 "state": "enabled", 00:18:34.617 "thread": "nvmf_tgt_poll_group_000", 00:18:34.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:34.617 "listen_address": { 00:18:34.617 "trtype": "TCP", 00:18:34.617 "adrfam": "IPv4", 00:18:34.617 "traddr": "10.0.0.2", 00:18:34.617 "trsvcid": "4420" 00:18:34.617 }, 00:18:34.617 "peer_address": { 00:18:34.617 "trtype": "TCP", 00:18:34.617 "adrfam": "IPv4", 00:18:34.617 "traddr": "10.0.0.1", 00:18:34.617 "trsvcid": "35574" 00:18:34.617 }, 00:18:34.617 "auth": { 00:18:34.617 "state": "completed", 00:18:34.617 "digest": "sha512", 00:18:34.617 "dhgroup": "ffdhe4096" 00:18:34.617 } 00:18:34.617 } 00:18:34.617 ]' 00:18:34.617 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.875 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.875 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.875 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.875 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.875 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.875 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.875 07:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.134 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:35.134 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:35.702 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:35.959 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:35.959 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.960 07:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.218 00:18:36.218 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.218 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.218 
07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.474 { 00:18:36.474 "cntlid": 129, 00:18:36.474 "qid": 0, 00:18:36.474 "state": "enabled", 00:18:36.474 "thread": "nvmf_tgt_poll_group_000", 00:18:36.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:36.474 "listen_address": { 00:18:36.474 "trtype": "TCP", 00:18:36.474 "adrfam": "IPv4", 00:18:36.474 "traddr": "10.0.0.2", 00:18:36.474 "trsvcid": "4420" 00:18:36.474 }, 00:18:36.474 "peer_address": { 00:18:36.474 "trtype": "TCP", 00:18:36.474 "adrfam": "IPv4", 00:18:36.474 "traddr": "10.0.0.1", 00:18:36.474 "trsvcid": "43574" 00:18:36.474 }, 00:18:36.474 "auth": { 00:18:36.474 "state": "completed", 00:18:36.474 "digest": "sha512", 00:18:36.474 "dhgroup": "ffdhe6144" 00:18:36.474 } 00:18:36.474 } 00:18:36.474 ]' 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.474 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.733 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:36.733 07:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:37.299 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.299 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:37.299 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.299 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.299 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.299 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.299 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.299 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.557 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.814 00:18:37.814 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.814 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.814 07:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.072 { 00:18:38.072 "cntlid": 131, 00:18:38.072 "qid": 0, 00:18:38.072 "state": "enabled", 00:18:38.072 "thread": "nvmf_tgt_poll_group_000", 00:18:38.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:38.072 "listen_address": { 00:18:38.072 "trtype": "TCP", 00:18:38.072 "adrfam": "IPv4", 00:18:38.072 "traddr": "10.0.0.2", 00:18:38.072 "trsvcid": "4420" 00:18:38.072 }, 00:18:38.072 "peer_address": { 00:18:38.072 "trtype": "TCP", 00:18:38.072 "adrfam": "IPv4", 00:18:38.072 "traddr": "10.0.0.1", 00:18:38.072 "trsvcid": "43584" 00:18:38.072 }, 00:18:38.072 "auth": { 00:18:38.072 "state": "completed", 00:18:38.072 "digest": "sha512", 00:18:38.072 "dhgroup": "ffdhe6144" 00:18:38.072 } 00:18:38.072 } 00:18:38.072 ]' 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.072 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.073 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.073 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.073 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.073 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.331 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:38.331 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:38.897 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.897 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.897 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.897 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.897 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.897 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.897 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:38.897 07:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.155 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.156 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.156 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.413 00:18:39.413 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.413 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.413 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.670 { 00:18:39.670 "cntlid": 133, 00:18:39.670 "qid": 0, 00:18:39.670 "state": "enabled", 00:18:39.670 "thread": "nvmf_tgt_poll_group_000", 00:18:39.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:39.670 "listen_address": { 00:18:39.670 "trtype": "TCP", 00:18:39.670 "adrfam": "IPv4", 00:18:39.670 "traddr": "10.0.0.2", 00:18:39.670 "trsvcid": "4420" 00:18:39.670 }, 00:18:39.670 "peer_address": { 00:18:39.670 "trtype": "TCP", 00:18:39.670 "adrfam": "IPv4", 00:18:39.670 "traddr": "10.0.0.1", 00:18:39.670 "trsvcid": "43608" 00:18:39.670 }, 00:18:39.670 "auth": { 00:18:39.670 "state": "completed", 00:18:39.670 "digest": "sha512", 00:18:39.670 "dhgroup": "ffdhe6144" 00:18:39.670 } 00:18:39.670 } 00:18:39.670 ]' 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:39.670 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.928 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.928 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.928 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.928 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret 
DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:39.928 07:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:40.492 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.492 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:40.492 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.492 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.492 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.492 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.492 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:40.492 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:40.749 07:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.314 00:18:41.314 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.314 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.315 { 00:18:41.315 "cntlid": 135, 00:18:41.315 "qid": 0, 00:18:41.315 "state": "enabled", 00:18:41.315 "thread": "nvmf_tgt_poll_group_000", 00:18:41.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:41.315 "listen_address": { 00:18:41.315 "trtype": "TCP", 00:18:41.315 "adrfam": "IPv4", 00:18:41.315 "traddr": "10.0.0.2", 00:18:41.315 "trsvcid": "4420" 00:18:41.315 }, 00:18:41.315 "peer_address": { 00:18:41.315 "trtype": "TCP", 00:18:41.315 "adrfam": "IPv4", 00:18:41.315 "traddr": "10.0.0.1", 00:18:41.315 "trsvcid": "43636" 00:18:41.315 }, 00:18:41.315 "auth": { 00:18:41.315 "state": "completed", 00:18:41.315 "digest": "sha512", 00:18:41.315 "dhgroup": "ffdhe6144" 00:18:41.315 } 00:18:41.315 } 00:18:41.315 ]' 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.315 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.572 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.572 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.572 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.573 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.573 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.830 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:41.830 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.395 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.961 00:18:42.961 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.961 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.961 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.218 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.219 { 00:18:43.219 "cntlid": 137, 00:18:43.219 "qid": 0, 00:18:43.219 "state": "enabled", 00:18:43.219 "thread": "nvmf_tgt_poll_group_000", 00:18:43.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:43.219 "listen_address": { 00:18:43.219 "trtype": "TCP", 00:18:43.219 "adrfam": "IPv4", 00:18:43.219 "traddr": "10.0.0.2", 00:18:43.219 "trsvcid": "4420" 00:18:43.219 }, 00:18:43.219 "peer_address": { 00:18:43.219 "trtype": "TCP", 00:18:43.219 "adrfam": "IPv4", 00:18:43.219 "traddr": "10.0.0.1", 00:18:43.219 "trsvcid": "43658" 00:18:43.219 }, 00:18:43.219 "auth": { 00:18:43.219 "state": "completed", 00:18:43.219 "digest": "sha512", 00:18:43.219 "dhgroup": "ffdhe8192" 00:18:43.219 } 00:18:43.219 } 00:18:43.219 ]' 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.219 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.476 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:43.477 07:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:44.059 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.059 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.059 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.059 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.059 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.059 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.059 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.059 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.317 07:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.317 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.883 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.883 { 00:18:44.883 "cntlid": 139, 00:18:44.883 "qid": 0, 00:18:44.883 "state": "enabled", 00:18:44.883 "thread": "nvmf_tgt_poll_group_000", 00:18:44.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:44.883 "listen_address": { 00:18:44.883 "trtype": "TCP", 00:18:44.883 "adrfam": "IPv4", 00:18:44.883 "traddr": "10.0.0.2", 00:18:44.883 "trsvcid": "4420" 00:18:44.883 }, 00:18:44.883 "peer_address": { 00:18:44.883 "trtype": "TCP", 00:18:44.883 "adrfam": "IPv4", 00:18:44.883 "traddr": "10.0.0.1", 00:18:44.883 "trsvcid": "43686" 00:18:44.883 }, 00:18:44.883 "auth": { 00:18:44.883 "state": "completed", 00:18:44.883 "digest": "sha512", 00:18:44.883 "dhgroup": "ffdhe8192" 00:18:44.883 } 00:18:44.883 } 00:18:44.883 ]' 00:18:44.883 07:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.141 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.141 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.141 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.141 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.141 07:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.141 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.141 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.401 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:45.401 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: --dhchap-ctrl-secret DHHC-1:02:YmJkMjllYTA2Y2JiZjc3ZTZhZDViMGYxZjNhNjQxYWVhNzk2YTkxOGNhZmYyOTQzmFFZNQ==: 00:18:45.966 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.966 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.966 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.966 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.966 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.966 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.966 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:45.966 07:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.224 07:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.224 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.482 00:18:46.482 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.740 { 00:18:46.740 "cntlid": 141, 00:18:46.740 "qid": 0, 00:18:46.740 "state": "enabled", 00:18:46.740 "thread": "nvmf_tgt_poll_group_000", 00:18:46.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:46.740 "listen_address": { 00:18:46.740 "trtype": "TCP", 00:18:46.740 "adrfam": "IPv4", 00:18:46.740 "traddr": "10.0.0.2", 00:18:46.740 "trsvcid": "4420" 00:18:46.740 }, 00:18:46.740 "peer_address": { 00:18:46.740 "trtype": "TCP", 00:18:46.740 "adrfam": "IPv4", 00:18:46.740 "traddr": "10.0.0.1", 00:18:46.740 "trsvcid": "40896" 00:18:46.740 }, 00:18:46.740 "auth": { 00:18:46.740 "state": "completed", 00:18:46.740 "digest": "sha512", 00:18:46.740 "dhgroup": "ffdhe8192" 00:18:46.740 } 00:18:46.740 } 00:18:46.740 ]' 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.740 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.999 07:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.999 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.999 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.999 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.999 07:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.258 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:47.258 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:01:MmZhYjY4MTQ0MjJhOGNmOTQ0ZjExNGE0NzZhOGI3ZjecbrUD: 00:18:47.823 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.823 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.824 07:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.824 07:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.389 00:18:48.389 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.389 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.389 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.647 { 00:18:48.647 "cntlid": 143, 00:18:48.647 "qid": 0, 00:18:48.647 "state": "enabled", 00:18:48.647 "thread": "nvmf_tgt_poll_group_000", 00:18:48.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:48.647 "listen_address": { 00:18:48.647 "trtype": "TCP", 00:18:48.647 "adrfam": "IPv4", 00:18:48.647 "traddr": "10.0.0.2", 00:18:48.647 "trsvcid": "4420" 00:18:48.647 }, 00:18:48.647 "peer_address": { 00:18:48.647 "trtype": "TCP", 00:18:48.647 "adrfam": "IPv4", 00:18:48.647 "traddr": "10.0.0.1", 00:18:48.647 "trsvcid": "40918" 00:18:48.647 }, 00:18:48.647 "auth": { 00:18:48.647 "state": "completed", 00:18:48.647 "digest": "sha512", 00:18:48.647 "dhgroup": "ffdhe8192" 00:18:48.647 } 00:18:48.647 } 00:18:48.647 ]' 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.647 
07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.647 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.905 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:48.905 07:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:49.470 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.727 07:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.727 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.293 00:18:50.293 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.293 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.293 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.293 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.293 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.293 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.550 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.550 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.550 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.550 { 00:18:50.550 "cntlid": 145, 00:18:50.550 "qid": 0, 00:18:50.550 "state": "enabled", 00:18:50.550 "thread": "nvmf_tgt_poll_group_000", 00:18:50.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:50.550 "listen_address": { 00:18:50.550 "trtype": "TCP", 00:18:50.550 "adrfam": "IPv4", 00:18:50.550 "traddr": "10.0.0.2", 00:18:50.550 "trsvcid": "4420" 00:18:50.550 }, 00:18:50.550 "peer_address": { 00:18:50.550 
"trtype": "TCP", 00:18:50.550 "adrfam": "IPv4", 00:18:50.550 "traddr": "10.0.0.1", 00:18:50.550 "trsvcid": "40932" 00:18:50.550 }, 00:18:50.550 "auth": { 00:18:50.550 "state": "completed", 00:18:50.551 "digest": "sha512", 00:18:50.551 "dhgroup": "ffdhe8192" 00:18:50.551 } 00:18:50.551 } 00:18:50.551 ]' 00:18:50.551 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.551 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.551 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.551 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:50.551 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.551 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.551 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.551 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.808 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:50.809 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDU0M2ZmZmNlNzViN2FkZWU4Mjg2ZjdkYzhmMDk4NjZhYWJmOGYwZjU0MWYyNjYyyeyJqw==: --dhchap-ctrl-secret DHHC-1:03:ZGJjMDU0MGVhMTgxMWRkNGFmNjJjMGQ0NWRhZjYzYjJjOThkMGU4OGI1NTI1NmIxYzQwNGE4M2E1YmYxYmI5MrCtGRM=: 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:51.373 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:51.939 request: 00:18:51.939 { 00:18:51.939 "name": "nvme0", 00:18:51.939 "trtype": "tcp", 00:18:51.939 "traddr": "10.0.0.2", 00:18:51.939 "adrfam": "ipv4", 00:18:51.939 "trsvcid": "4420", 00:18:51.939 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:51.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:51.940 "prchk_reftag": false, 00:18:51.940 "prchk_guard": false, 00:18:51.940 "hdgst": false, 00:18:51.940 "ddgst": false, 00:18:51.940 "dhchap_key": "key2", 00:18:51.940 "allow_unrecognized_csi": false, 00:18:51.940 "method": "bdev_nvme_attach_controller", 00:18:51.940 "req_id": 1 00:18:51.940 } 00:18:51.940 Got JSON-RPC error response 00:18:51.940 response: 00:18:51.940 { 00:18:51.940 "code": -5, 00:18:51.940 "message": "Input/output error" 00:18:51.940 } 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.940 07:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:51.940 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:52.198 request: 00:18:52.198 { 00:18:52.198 "name": "nvme0", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:52.198 "prchk_reftag": false, 00:18:52.198 "prchk_guard": false, 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false, 00:18:52.198 "dhchap_key": "key1", 00:18:52.198 "dhchap_ctrlr_key": "ckey2", 00:18:52.198 "allow_unrecognized_csi": false, 00:18:52.198 "method": "bdev_nvme_attach_controller", 00:18:52.198 "req_id": 1 00:18:52.198 } 00:18:52.198 Got JSON-RPC error response 00:18:52.198 response: 00:18:52.198 { 00:18:52.198 "code": -5, 00:18:52.198 "message": "Input/output error" 00:18:52.198 } 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:52.198 07:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.198 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.764 request: 00:18:52.764 { 00:18:52.764 "name": "nvme0", 00:18:52.764 "trtype": "tcp", 00:18:52.764 "traddr": "10.0.0.2", 00:18:52.764 "adrfam": "ipv4", 00:18:52.764 "trsvcid": "4420", 00:18:52.764 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:52.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:52.764 "prchk_reftag": false, 00:18:52.764 "prchk_guard": false, 00:18:52.764 "hdgst": false, 00:18:52.764 "ddgst": false, 00:18:52.764 "dhchap_key": "key1", 00:18:52.764 "dhchap_ctrlr_key": "ckey1", 00:18:52.764 "allow_unrecognized_csi": false, 00:18:52.764 "method": "bdev_nvme_attach_controller", 00:18:52.764 "req_id": 1 00:18:52.764 } 00:18:52.764 Got JSON-RPC error response 00:18:52.764 response: 00:18:52.764 { 00:18:52.764 "code": -5, 00:18:52.764 "message": "Input/output error" 00:18:52.764 } 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 718212 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 718212 ']' 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 718212 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718212 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 718212' 00:18:52.764 killing process with pid 718212 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 718212 00:18:52.764 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 718212 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=740236 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 740236 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 740236 ']' 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.022 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 740236 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 740236 ']' 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
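
The entries above restart nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and -L nvmf_auth, and the entries that follow load the generated DHCHAP key files into the target keyring before the authentication cases are re-run. Below is a minimal sketch of that target-side provisioning, using only RPCs that appear in this log; key names, file paths and NQNs are copied from the run, while the rpc.py invocation talking to the target's default /var/tmp/spdk.sock is an assumption, since the log's rpc_cmd wrapper hides the socket argument.

#!/usr/bin/env bash
set -e
RPC=./spdk/scripts/rpc.py   # same script the log invokes via its full workspace path (assumed relative path)

# Register a DHCHAP secret file under a keyring name, as done for key0..key3 and ckey0..ckey2 above
$RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.SQ5

# Allow the host NQN to authenticate to the subsystem with that key
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key3
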
00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.280 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.539 null0 00:18:53.539 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.539 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.539 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CFl 00:18:53.539 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.539 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.539 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.539 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bsY ]] 00:18:53.539 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bsY 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YJ8 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.SNF ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SNF 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.540 07:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ODq 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.7sE ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7sE 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SQ5 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
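
With key3 registered on both sides, the auth.sh@60/@31 pair that follows is the host-side attach: connect_authenticate drives bdev_nvme_attach_controller through the host app's RPC socket and then reads the queue pair's auth block back from the target. A condensed sketch of that check, built only from commands visible in this log (addresses, NQNs and /var/tmp/host.sock are taken from the run; the target-side call using the default RPC socket and the combined jq expression, which stands in for the script's three separate .auth queries, are assumptions):

# Host side: attach with the DHCHAP key over the host app's RPC socket
./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

# Target side: confirm the new queue pair completed DH-HMAC-CHAP with the expected parameters
./spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
# expected, matching the qpair dumps earlier in this log: completed sha512 ffdhe8192
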
00:18:53.540 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.476 nvme0n1 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.476 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.476 { 00:18:54.476 "cntlid": 1, 00:18:54.476 "qid": 0, 00:18:54.476 "state": "enabled", 00:18:54.476 "thread": "nvmf_tgt_poll_group_000", 00:18:54.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:54.476 "listen_address": { 00:18:54.476 "trtype": "TCP", 00:18:54.476 "adrfam": "IPv4", 00:18:54.476 "traddr": "10.0.0.2", 00:18:54.476 "trsvcid": "4420" 00:18:54.476 }, 00:18:54.476 "peer_address": { 00:18:54.476 "trtype": "TCP", 00:18:54.476 "adrfam": "IPv4", 00:18:54.476 "traddr": "10.0.0.1", 00:18:54.476 "trsvcid": "41014" 00:18:54.477 }, 00:18:54.477 "auth": { 00:18:54.477 "state": "completed", 00:18:54.477 "digest": "sha512", 00:18:54.477 "dhgroup": "ffdhe8192" 00:18:54.477 } 00:18:54.477 } 00:18:54.477 ]' 00:18:54.477 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.477 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.477 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.736 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.736 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.736 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.736 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.736 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.736 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:54.736 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:55.304 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.564 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.823 request: 00:18:55.823 { 00:18:55.823 "name": "nvme0", 00:18:55.823 "trtype": "tcp", 00:18:55.823 "traddr": "10.0.0.2", 00:18:55.824 "adrfam": "ipv4", 00:18:55.824 "trsvcid": "4420", 00:18:55.824 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:55.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:55.824 "prchk_reftag": false, 00:18:55.824 "prchk_guard": false, 00:18:55.824 "hdgst": false, 00:18:55.824 "ddgst": false, 00:18:55.824 "dhchap_key": "key3", 00:18:55.824 "allow_unrecognized_csi": false, 00:18:55.824 "method": "bdev_nvme_attach_controller", 00:18:55.824 "req_id": 1 00:18:55.824 } 00:18:55.824 Got JSON-RPC error response 00:18:55.824 response: 00:18:55.824 { 00:18:55.824 "code": -5, 00:18:55.824 "message": "Input/output error" 00:18:55.824 } 00:18:55.824 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:55.824 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.824 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.824 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.824 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:55.824 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:55.824 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:55.824 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.083 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.341 request: 00:18:56.341 { 00:18:56.341 "name": "nvme0", 00:18:56.341 "trtype": "tcp", 00:18:56.341 "traddr": "10.0.0.2", 00:18:56.341 "adrfam": "ipv4", 00:18:56.341 "trsvcid": "4420", 00:18:56.341 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:56.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:56.341 "prchk_reftag": false, 00:18:56.341 "prchk_guard": false, 00:18:56.341 "hdgst": false, 00:18:56.341 "ddgst": false, 00:18:56.341 "dhchap_key": "key3", 00:18:56.341 "allow_unrecognized_csi": false, 00:18:56.341 "method": "bdev_nvme_attach_controller", 00:18:56.341 "req_id": 1 00:18:56.341 } 00:18:56.341 Got JSON-RPC error response 00:18:56.341 response: 00:18:56.341 { 00:18:56.341 "code": -5, 00:18:56.341 "message": "Input/output error" 00:18:56.341 } 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.341 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:56.599 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.600 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:56.600 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:56.600 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:56.600 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:56.600 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:56.858 request: 00:18:56.858 { 00:18:56.858 "name": "nvme0", 00:18:56.858 "trtype": "tcp", 00:18:56.858 "traddr": "10.0.0.2", 00:18:56.858 "adrfam": "ipv4", 00:18:56.858 "trsvcid": "4420", 00:18:56.858 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:56.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:56.858 "prchk_reftag": false, 00:18:56.858 "prchk_guard": false, 00:18:56.858 "hdgst": false, 00:18:56.858 "ddgst": false, 00:18:56.858 "dhchap_key": "key0", 00:18:56.858 "dhchap_ctrlr_key": "key1", 00:18:56.858 "allow_unrecognized_csi": false, 00:18:56.858 "method": "bdev_nvme_attach_controller", 00:18:56.858 "req_id": 1 00:18:56.858 } 00:18:56.858 Got JSON-RPC error response 00:18:56.858 response: 00:18:56.858 { 00:18:56.858 "code": -5, 00:18:56.858 "message": "Input/output error" 00:18:56.858 } 00:18:56.858 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:56.858 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:56.858 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.858 07:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.858 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:56.858 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:56.858 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:57.117 nvme0n1 00:18:57.117 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:57.117 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:57.117 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.376 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.376 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.376 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.635 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:57.635 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.635 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.635 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.635 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:57.635 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:57.635 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:58.202 nvme0n1 00:18:58.202 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:58.202 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:58.202 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.460 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.460 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:58.460 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.460 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.460 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.460 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:58.460 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.460 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:58.718 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.718 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:58.718 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: --dhchap-ctrl-secret DHHC-1:03:NzBlMjJmMTY2ODM4MDIxZjE2YTIzZjg1MWQyNGE0MjJlNTIwZmVlMDkzNjk2N2I2OWZhODBlZTZhZjE3YmQ3YySixhY=: 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.286 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:59.544 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:59.801 request: 00:18:59.801 { 00:18:59.801 "name": "nvme0", 00:18:59.801 "trtype": "tcp", 00:18:59.801 "traddr": "10.0.0.2", 00:18:59.801 "adrfam": "ipv4", 00:18:59.801 "trsvcid": "4420", 00:18:59.801 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:59.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:59.801 "prchk_reftag": false, 00:18:59.801 "prchk_guard": false, 00:18:59.801 "hdgst": false, 00:18:59.801 "ddgst": false, 00:18:59.801 "dhchap_key": "key1", 00:18:59.801 "allow_unrecognized_csi": false, 00:18:59.801 "method": "bdev_nvme_attach_controller", 00:18:59.801 "req_id": 1 00:18:59.801 } 00:18:59.801 Got JSON-RPC error response 00:18:59.801 response: 00:18:59.801 { 00:18:59.801 "code": -5, 00:18:59.801 "message": "Input/output error" 00:18:59.801 } 00:18:59.801 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:59.801 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:59.801 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:59.801 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:59.801 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:59.801 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:59.801 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:00.734 nvme0n1 00:19:00.734 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:00.734 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:00.734 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.734 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.734 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.734 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.994 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:00.994 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.994 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.994 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.994 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:00.994 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:00.994 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:01.256 nvme0n1 00:19:01.256 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:01.256 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:01.256 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.515 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.515 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.515 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: '' 2s 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: ]] 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NGI4MDU0MTNmNjdjZTYxYjRmMWNiMjU0ZGFmNTg4ZjPhRKWM: 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:01.773 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: 2s 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: ]] 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGZkZDc5ZWVjNzdmZTE3MDA0YzFiMWZmYTc3ODFjODMxMWFjOGEwZjYzYWE3ZGI4QybcpA==: 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:03.673 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:06.199 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:06.766 nvme0n1 00:19:06.766 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:06.766 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.766 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.766 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.766 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:06.766 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:07.024 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:07.024 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:07.024 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.282 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.282 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.282 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.282 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.282 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.282 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:07.283 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:07.541 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:07.541 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:07.541 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.800 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:07.801 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:08.059 request: 00:19:08.059 { 00:19:08.059 "name": "nvme0", 00:19:08.059 "dhchap_key": "key1", 00:19:08.059 "dhchap_ctrlr_key": "key3", 00:19:08.059 "method": "bdev_nvme_set_keys", 00:19:08.059 "req_id": 1 00:19:08.059 } 00:19:08.059 Got JSON-RPC error response 00:19:08.059 response: 00:19:08.059 { 00:19:08.060 "code": -13, 00:19:08.060 "message": "Permission denied" 00:19:08.060 } 00:19:08.060 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:08.060 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.060 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:08.060 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.060 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:08.060 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:08.060 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.318 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:08.318 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:09.254 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:09.254 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:09.254 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.513 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:09.513 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:09.513 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.513 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.513 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.513 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:09.513 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:09.513 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:10.450 nvme0n1 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
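
Editor's note: the trace above is exercising SPDK's in-band DH-HMAC-CHAP re-keying. The target's key material for a host is changed with nvmf_subsystem_set_keys, the host then follows with bdev_nvme_set_keys, and a mismatched pair is refused with JSON-RPC error -13 "Permission denied", which is exactly what the NOT wrapper asserts. A minimal sketch of that flow, reusing only the RPCs and flags visible in this run (the target-side rpc_cmd wrapper shows no explicit socket, so the default socket is an assumption; the host socket /var/tmp/host.sock and the key names key0..key3 come from this test's setup):

# Re-key sketch modeled on the trace above; paths, NQNs and key names mirror
# this specific run and are not a general recipe.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# 1) Target side: install the new key pair for this host.
$RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

# 2) Host side: re-authenticate the already-attached controller with the same pair.
$RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# 3) Sanity check: the controller count stays at 1 while the session survives the re-key.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length

# A deliberately wrong pair (e.g. key1/key3 when the target expects key2/key3)
# fails step 2 with code -13 "Permission denied", as shown in the request/response
# dump above.
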
00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:10.450 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:10.709 request: 00:19:10.709 { 00:19:10.709 "name": "nvme0", 00:19:10.709 "dhchap_key": "key2", 00:19:10.709 "dhchap_ctrlr_key": "key0", 00:19:10.709 "method": "bdev_nvme_set_keys", 00:19:10.709 "req_id": 1 00:19:10.709 } 00:19:10.709 Got JSON-RPC error response 00:19:10.709 response: 00:19:10.709 { 00:19:10.709 "code": -13, 00:19:10.709 "message": "Permission denied" 00:19:10.709 } 00:19:10.709 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:10.709 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:10.709 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:10.709 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:10.709 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:10.709 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:10.709 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.968 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:10.968 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:11.904 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:11.904 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:11.904 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 718232 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 718232 ']' 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 718232 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:12.163 07:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718232 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 718232' 00:19:12.163 killing process with pid 718232 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 718232 00:19:12.163 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 718232 00:19:12.420 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:12.421 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:12.421 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:12.421 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:12.421 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:12.421 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:12.421 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:12.421 rmmod nvme_tcp 00:19:12.678 rmmod nvme_fabrics 00:19:12.678 rmmod nvme_keyring 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 740236 ']' 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 740236 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 740236 ']' 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 740236 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 740236 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 740236' 00:19:12.678 killing process with pid 740236 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 740236 00:19:12.678 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 740236 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.937 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.CFl /tmp/spdk.key-sha256.YJ8 /tmp/spdk.key-sha384.ODq /tmp/spdk.key-sha512.SQ5 /tmp/spdk.key-sha512.bsY /tmp/spdk.key-sha384.SNF /tmp/spdk.key-sha256.7sE '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:14.838 00:19:14.838 real 2m31.560s 00:19:14.838 user 5m49.517s 00:19:14.838 sys 0m23.867s 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.838 ************************************ 00:19:14.838 END TEST nvmf_auth_target 00:19:14.838 ************************************ 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.838 07:28:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.097 ************************************ 00:19:15.097 START TEST nvmf_bdevio_no_huge 00:19:15.097 ************************************ 00:19:15.097 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:15.097 * Looking for test storage... 
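
Editor's note: before the bdevio section starts, the nvmf_auth_target teardown that just ran is worth condensing. nvmftestfini unloads the kernel NVMe/TCP initiator, kills the target by PID, restores the firewall by replaying the saved ruleset minus every rule tagged with an SPDK_NVMF comment, removes the test namespace, and deletes the generated DH-CHAP key files. A rough sketch, with names taken from this run (the body of the _remove_spdk_ns helper is not in this excerpt, so the ip netns delete step is an assumption):

# Teardown sketch condensed from the nvmftestfini/cleanup calls above;
# interface and namespace names are specific to this CI host.
modprobe -v -r nvme-tcp            # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring going away
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                    # killprocess: the nvmf_tgt pid (740236 in this run)

# Replay the saved ruleset without the SPDK_NVMF-tagged entries, leaving the
# rest of the host's iptables state untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Drop the target-side namespace and flush the initiator interface
# (ip netns delete is assumed; the helper's body isn't shown here).
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1

# Remove the generated key material (the trace removes each /tmp/spdk.key-* file by name).
rm -f /tmp/spdk.key-*
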
00:19:15.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:15.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.097 --rc genhtml_branch_coverage=1 00:19:15.097 --rc genhtml_function_coverage=1 00:19:15.097 --rc genhtml_legend=1 00:19:15.097 --rc geninfo_all_blocks=1 00:19:15.097 --rc geninfo_unexecuted_blocks=1 00:19:15.097 00:19:15.097 ' 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:15.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.097 --rc genhtml_branch_coverage=1 00:19:15.097 --rc genhtml_function_coverage=1 00:19:15.097 --rc genhtml_legend=1 00:19:15.097 --rc geninfo_all_blocks=1 00:19:15.097 --rc geninfo_unexecuted_blocks=1 00:19:15.097 00:19:15.097 ' 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:15.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.097 --rc genhtml_branch_coverage=1 00:19:15.097 --rc genhtml_function_coverage=1 00:19:15.097 --rc genhtml_legend=1 00:19:15.097 --rc geninfo_all_blocks=1 00:19:15.097 --rc geninfo_unexecuted_blocks=1 00:19:15.097 00:19:15.097 ' 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:15.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.097 --rc genhtml_branch_coverage=1 00:19:15.097 --rc genhtml_function_coverage=1 00:19:15.097 --rc genhtml_legend=1 00:19:15.097 --rc geninfo_all_blocks=1 00:19:15.097 --rc geninfo_unexecuted_blocks=1 00:19:15.097 00:19:15.097 ' 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.097 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:15.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:15.098 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:21.665 
07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:21.665 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:21.665 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.665 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:21.666 Found net devices under 0000:86:00.0: cvl_0_0 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:21.666 Found net devices under 0000:86:00.1: cvl_0_1 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:21.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:19:21.666 00:19:21.666 --- 10.0.0.2 ping statistics --- 00:19:21.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.666 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:19:21.666 00:19:21.666 --- 10.0.0.1 ping statistics --- 00:19:21.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.666 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=746926 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 746926 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 746926 ']' 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.666 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.666 [2024-11-26 07:28:48.878236] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:19:21.666 [2024-11-26 07:28:48.878284] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:21.666 [2024-11-26 07:28:48.950860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.666 [2024-11-26 07:28:48.999590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.666 [2024-11-26 07:28:48.999624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.666 [2024-11-26 07:28:48.999631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.666 [2024-11-26 07:28:48.999637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.666 [2024-11-26 07:28:48.999641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
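
Editor's note: the namespace plumbing that nvmf_tcp_init performed just above is the backbone of this test. One E810 port (cvl_0_0) is moved into a fresh namespace and becomes the target at 10.0.0.2/24, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, a single tagged iptables rule opens TCP/4420, and two pings confirm reachability both ways. Condensed from the trace (interface names are specific to this machine's ice driver):

# Point-to-point test topology, as set up by nvmf_tcp_init above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target, inside the namespace

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port; the SPDK_NVMF comment lets teardown find and drop this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator

The target is then launched inside the namespace as nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78, which is the whole point of this variant: the EAL parameters line above shows DPDK coming up with -m 1024 --no-huge, i.e. plain pages with a 1024 MB cap instead of hugepages.
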
00:19:21.666 [2024-11-26 07:28:49.000841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:21.666 [2024-11-26 07:28:49.000964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:21.666 [2024-11-26 07:28:49.001054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:21.666 [2024-11-26 07:28:49.001053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.666 [2024-11-26 07:28:49.159029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.666 Malloc0 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.666 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.667 [2024-11-26 07:28:49.203337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:21.667 { 00:19:21.667 "params": { 00:19:21.667 "name": "Nvme$subsystem", 00:19:21.667 "trtype": "$TEST_TRANSPORT", 00:19:21.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.667 "adrfam": "ipv4", 00:19:21.667 "trsvcid": "$NVMF_PORT", 00:19:21.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.667 "hdgst": ${hdgst:-false}, 00:19:21.667 "ddgst": ${ddgst:-false} 00:19:21.667 }, 00:19:21.667 "method": "bdev_nvme_attach_controller" 00:19:21.667 } 00:19:21.667 EOF 00:19:21.667 )") 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:21.667 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:21.667 "params": { 00:19:21.667 "name": "Nvme1", 00:19:21.667 "trtype": "tcp", 00:19:21.667 "traddr": "10.0.0.2", 00:19:21.667 "adrfam": "ipv4", 00:19:21.667 "trsvcid": "4420", 00:19:21.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.667 "hdgst": false, 00:19:21.667 "ddgst": false 00:19:21.667 }, 00:19:21.667 "method": "bdev_nvme_attach_controller" 00:19:21.667 }' 00:19:21.667 [2024-11-26 07:28:49.255128] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:19:21.667 [2024-11-26 07:28:49.255175] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid747152 ] 00:19:21.667 [2024-11-26 07:28:49.322008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.667 [2024-11-26 07:28:49.371638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.667 [2024-11-26 07:28:49.371657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.667 [2024-11-26 07:28:49.371659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.667 I/O targets: 00:19:21.667 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:21.667 00:19:21.667 00:19:21.667 CUnit - A unit testing framework for C - Version 2.1-3 00:19:21.667 http://cunit.sourceforge.net/ 00:19:21.667 00:19:21.667 00:19:21.667 Suite: bdevio tests on: Nvme1n1 00:19:21.667 Test: blockdev write read block ...passed 00:19:21.667 Test: blockdev write zeroes read block ...passed 00:19:21.667 Test: blockdev write zeroes read no split ...passed 00:19:21.667 Test: blockdev write zeroes read split ...passed 00:19:21.667 Test: blockdev write zeroes read split partial ...passed 00:19:21.667 Test: blockdev reset ...[2024-11-26 07:28:49.661817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:21.667 [2024-11-26 07:28:49.661880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f4920 (9): Bad file descriptor 00:19:21.925 [2024-11-26 07:28:49.811426] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:21.925 passed 00:19:21.925 Test: blockdev write read 8 blocks ...passed 00:19:21.925 Test: blockdev write read size > 128k ...passed 00:19:21.925 Test: blockdev write read invalid size ...passed 00:19:21.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.925 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.925 Test: blockdev write read max offset ...passed 00:19:21.925 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.925 Test: blockdev writev readv 8 blocks ...passed 00:19:21.925 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.925 Test: blockdev writev readv block ...passed 00:19:22.183 Test: blockdev writev readv size > 128k ...passed 00:19:22.183 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.183 Test: blockdev comparev and writev ...[2024-11-26 07:28:50.021897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.183 [2024-11-26 07:28:50.021925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.021940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.183 [2024-11-26 07:28:50.021958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.022225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.183 [2024-11-26 07:28:50.022235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.022247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.183 [2024-11-26 07:28:50.022254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.022498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.183 [2024-11-26 07:28:50.022508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.022520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.183 [2024-11-26 07:28:50.022527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.022769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.183 [2024-11-26 07:28:50.022780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.022792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.183 [2024-11-26 07:28:50.022800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.183 passed 00:19:22.183 Test: blockdev nvme passthru rw ...passed 00:19:22.183 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:28:50.104347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.183 [2024-11-26 07:28:50.104365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.104473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.183 [2024-11-26 07:28:50.104483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.104587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.183 [2024-11-26 07:28:50.104597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.183 [2024-11-26 07:28:50.104695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.183 [2024-11-26 07:28:50.104705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.183 passed 00:19:22.183 Test: blockdev nvme admin passthru ...passed 00:19:22.183 Test: blockdev copy ...passed 00:19:22.183 00:19:22.183 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.183 suites 1 1 n/a 0 0 00:19:22.183 tests 23 23 23 0 0 00:19:22.183 asserts 152 152 152 0 n/a 00:19:22.183 00:19:22.183 Elapsed time = 1.246 seconds 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:22.456 rmmod nvme_tcp 00:19:22.456 rmmod nvme_fabrics 00:19:22.456 rmmod nvme_keyring 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:22.456 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:22.457 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:22.457 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 746926 ']' 00:19:22.457 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 746926 00:19:22.457 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 746926 ']' 00:19:22.457 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 746926 00:19:22.457 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:22.457 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.457 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 746926 00:19:22.812 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 746926' 00:19:22.813 killing process with pid 746926 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 746926 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 746926 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.813 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.484 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:25.484 00:19:25.484 real 0m10.007s 00:19:25.484 user 0m10.990s 00:19:25.484 sys 0m5.168s 00:19:25.484 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.484 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:25.484 ************************************ 00:19:25.484 END TEST nvmf_bdevio_no_huge 00:19:25.484 ************************************ 00:19:25.484 07:28:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:25.484 07:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:25.484 07:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.484 07:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:25.484 ************************************ 00:19:25.484 START TEST nvmf_tls 00:19:25.484 ************************************ 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:25.484 * Looking for test storage... 00:19:25.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.484 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:25.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.484 --rc genhtml_branch_coverage=1 00:19:25.484 --rc genhtml_function_coverage=1 00:19:25.484 --rc genhtml_legend=1 00:19:25.485 --rc geninfo_all_blocks=1 00:19:25.485 --rc geninfo_unexecuted_blocks=1 00:19:25.485 00:19:25.485 ' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:25.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.485 --rc genhtml_branch_coverage=1 00:19:25.485 --rc genhtml_function_coverage=1 00:19:25.485 --rc genhtml_legend=1 00:19:25.485 --rc geninfo_all_blocks=1 00:19:25.485 --rc geninfo_unexecuted_blocks=1 00:19:25.485 00:19:25.485 ' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:25.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.485 --rc genhtml_branch_coverage=1 00:19:25.485 --rc genhtml_function_coverage=1 00:19:25.485 --rc genhtml_legend=1 00:19:25.485 --rc geninfo_all_blocks=1 00:19:25.485 --rc geninfo_unexecuted_blocks=1 00:19:25.485 00:19:25.485 ' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:25.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.485 --rc genhtml_branch_coverage=1 00:19:25.485 --rc genhtml_function_coverage=1 00:19:25.485 --rc genhtml_legend=1 00:19:25.485 --rc geninfo_all_blocks=1 00:19:25.485 --rc geninfo_unexecuted_blocks=1 00:19:25.485 00:19:25.485 ' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:25.485 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:30.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:30.954 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:30.954 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:30.955 Found net devices under 0000:86:00.0: cvl_0_0 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:30.955 Found net devices under 0000:86:00.1: cvl_0_1 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:30.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:19:30.955 00:19:30.955 --- 10.0.0.2 ping statistics --- 00:19:30.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.955 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:19:30.955 00:19:30.955 --- 10.0.0.1 ping statistics --- 00:19:30.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.955 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=750841 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 750841 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 750841 ']' 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.955 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.955 [2024-11-26 07:28:58.850001] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:19:30.955 [2024-11-26 07:28:58.850051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.955 [2024-11-26 07:28:58.919346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.955 [2024-11-26 07:28:58.961880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.955 [2024-11-26 07:28:58.961916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.955 [2024-11-26 07:28:58.961923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.955 [2024-11-26 07:28:58.961929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.956 [2024-11-26 07:28:58.961934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.956 [2024-11-26 07:28:58.962534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.956 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.956 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:30.956 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.956 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.956 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.956 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.956 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:30.956 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:31.215 true 00:19:31.215 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.215 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:31.475 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:31.475 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:31.475 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:31.733 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.733 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:31.733 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:31.733 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:31.733 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:31.992 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:31.992 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:32.250 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:32.250 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:32.250 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.250 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:32.509 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:32.509 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:32.509 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:32.509 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.509 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:32.768 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:32.768 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:32.768 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:33.026 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.026 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:33.285 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.UzWW5fhBhk 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.i4ib7ovh3V 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.UzWW5fhBhk 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.i4ib7ovh3V 00:19:33.286 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:33.545 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:33.803 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.UzWW5fhBhk 00:19:33.803 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UzWW5fhBhk 00:19:33.803 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.803 [2024-11-26 07:29:01.868538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.803 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:34.062 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:34.321 [2024-11-26 07:29:02.249555] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.321 [2024-11-26 07:29:02.249774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.321 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:34.581 malloc0 00:19:34.581 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:34.581 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UzWW5fhBhk 00:19:34.840 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:35.098 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UzWW5fhBhk 00:19:45.074 Initializing NVMe Controllers 00:19:45.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:45.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:45.074 Initialization complete. Launching workers. 00:19:45.074 ======================================================== 00:19:45.074 Latency(us) 00:19:45.074 Device Information : IOPS MiB/s Average min max 00:19:45.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16363.48 63.92 3911.23 836.81 5220.60 00:19:45.074 ======================================================== 00:19:45.074 Total : 16363.48 63.92 3911.23 836.81 5220.60 00:19:45.074 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UzWW5fhBhk 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UzWW5fhBhk 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=753274 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 753274 /var/tmp/bdevperf.sock 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 753274 ']' 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:45.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.074 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.074 [2024-11-26 07:29:13.165034] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:19:45.074 [2024-11-26 07:29:13.165087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid753274 ] 00:19:45.333 [2024-11-26 07:29:13.222444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.333 [2024-11-26 07:29:13.264804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.333 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.333 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.333 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UzWW5fhBhk 00:19:45.592 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.851 [2024-11-26 07:29:13.720514] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.851 TLSTESTn1 00:19:45.851 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:45.851 Running I/O for 10 seconds... 
00:19:48.162 5435.00 IOPS, 21.23 MiB/s [2024-11-26T06:29:17.199Z] 5467.00 IOPS, 21.36 MiB/s [2024-11-26T06:29:18.134Z] 5471.33 IOPS, 21.37 MiB/s [2024-11-26T06:29:19.068Z] 5450.00 IOPS, 21.29 MiB/s [2024-11-26T06:29:20.004Z] 5470.60 IOPS, 21.37 MiB/s [2024-11-26T06:29:20.939Z] 5477.50 IOPS, 21.40 MiB/s [2024-11-26T06:29:22.315Z] 5475.29 IOPS, 21.39 MiB/s [2024-11-26T06:29:23.251Z] 5481.12 IOPS, 21.41 MiB/s [2024-11-26T06:29:24.193Z] 5481.67 IOPS, 21.41 MiB/s [2024-11-26T06:29:24.193Z] 5487.60 IOPS, 21.44 MiB/s 00:19:56.093 Latency(us) 00:19:56.093 [2024-11-26T06:29:24.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.093 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:56.093 Verification LBA range: start 0x0 length 0x2000 00:19:56.093 TLSTESTn1 : 10.01 5492.99 21.46 0.00 0.00 23267.83 4843.97 23023.08 00:19:56.093 [2024-11-26T06:29:24.193Z] =================================================================================================================== 00:19:56.093 [2024-11-26T06:29:24.193Z] Total : 5492.99 21.46 0.00 0.00 23267.83 4843.97 23023.08 00:19:56.093 { 00:19:56.093 "results": [ 00:19:56.093 { 00:19:56.093 "job": "TLSTESTn1", 00:19:56.093 "core_mask": "0x4", 00:19:56.093 "workload": "verify", 00:19:56.093 "status": "finished", 00:19:56.093 "verify_range": { 00:19:56.093 "start": 0, 00:19:56.093 "length": 8192 00:19:56.093 }, 00:19:56.093 "queue_depth": 128, 00:19:56.093 "io_size": 4096, 00:19:56.093 "runtime": 10.013121, 00:19:56.093 "iops": 5492.992644351347, 00:19:56.093 "mibps": 21.45700251699745, 00:19:56.093 "io_failed": 0, 00:19:56.093 "io_timeout": 0, 00:19:56.093 "avg_latency_us": 23267.82726319833, 00:19:56.093 "min_latency_us": 4843.965217391305, 00:19:56.093 "max_latency_us": 23023.081739130434 00:19:56.093 } 00:19:56.093 ], 00:19:56.093 "core_count": 1 00:19:56.093 } 00:19:56.093 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.093 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 753274 00:19:56.093 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 753274 ']' 00:19:56.093 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 753274 00:19:56.093 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.093 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.093 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 753274 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 753274' 00:19:56.094 killing process with pid 753274 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 753274 00:19:56.094 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.094 00:19:56.094 Latency(us) 00:19:56.094 [2024-11-26T06:29:24.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.094 [2024-11-26T06:29:24.194Z] 
=================================================================================================================== 00:19:56.094 [2024-11-26T06:29:24.194Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 753274 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i4ib7ovh3V 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i4ib7ovh3V 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.094 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i4ib7ovh3V 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.i4ib7ovh3V 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=755011 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 755011 /var/tmp/bdevperf.sock 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 755011 ']' 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
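For reference, the flow that target/tls.sh repeats for each of these bdevperf cases boils down to: start a standalone bdevperf idle on its own RPC socket, load the PSK file into its keyring, attach a TLS-enabled NVMe/TCP controller, then drive the verify workload through bdevperf.py. A rough sketch using the socket path, key file and NQNs taken from the log above (the wait loop stands in for the harness's waitforlisten helper, so treat this as an outline rather than the exact script):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock
    KEY=/tmp/tmp.UzWW5fhBhk          # PSK file created earlier and chmod'ed 0600

    # Start bdevperf idle (-z) on a private RPC socket and wait for the socket.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
    while [ ! -S "$SOCK" ]; do sleep 0.1; done

    # Load the PSK into the bdevperf keyring and attach over TLS.
    $SPDK/scripts/rpc.py -s $SOCK keyring_file_add_key key0 $KEY
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Run the verify workload against the attached bdev.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests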
00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.353 [2024-11-26 07:29:24.236552] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:19:56.353 [2024-11-26 07:29:24.236607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755011 ] 00:19:56.353 [2024-11-26 07:29:24.296456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.353 [2024-11-26 07:29:24.338504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:56.353 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.i4ib7ovh3V 00:19:56.612 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.870 [2024-11-26 07:29:24.777935] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.870 [2024-11-26 07:29:24.782787] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.870 [2024-11-26 07:29:24.783419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba3190 (107): Transport endpoint is not connected 00:19:56.870 [2024-11-26 07:29:24.784412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba3190 (9): Bad file descriptor 00:19:56.870 [2024-11-26 07:29:24.785413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:56.870 [2024-11-26 07:29:24.785422] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.870 [2024-11-26 07:29:24.785433] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:56.870 [2024-11-26 07:29:24.785443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
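This failure is the intended outcome of test case 147: the initiator loaded key_2 (/tmp/tmp.i4ib7ovh3V) while the target was configured with key0 (/tmp/tmp.UzWW5fhBhk), so the TLS handshake never completes, spdk_sock_recv() reports errno 107, and the attach RPC returns -5, as the JSON-RPC dump below shows. A hedged sketch of the negative check being made here (the real wrapper in autotest_common.sh is more involved):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Expect the attach to fail when the initiator PSK does not match the
    # key registered for this host on the target side.
    if $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
        echo "unexpected: controller attached with a mismatched PSK" >&2
        exit 1
    fi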
00:19:56.870 request: 00:19:56.870 { 00:19:56.870 "name": "TLSTEST", 00:19:56.870 "trtype": "tcp", 00:19:56.870 "traddr": "10.0.0.2", 00:19:56.870 "adrfam": "ipv4", 00:19:56.870 "trsvcid": "4420", 00:19:56.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.870 "prchk_reftag": false, 00:19:56.870 "prchk_guard": false, 00:19:56.870 "hdgst": false, 00:19:56.870 "ddgst": false, 00:19:56.870 "psk": "key0", 00:19:56.871 "allow_unrecognized_csi": false, 00:19:56.871 "method": "bdev_nvme_attach_controller", 00:19:56.871 "req_id": 1 00:19:56.871 } 00:19:56.871 Got JSON-RPC error response 00:19:56.871 response: 00:19:56.871 { 00:19:56.871 "code": -5, 00:19:56.871 "message": "Input/output error" 00:19:56.871 } 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 755011 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 755011 ']' 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 755011 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755011 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755011' 00:19:56.871 killing process with pid 755011 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 755011 00:19:56.871 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.871 00:19:56.871 Latency(us) 00:19:56.871 [2024-11-26T06:29:24.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.871 [2024-11-26T06:29:24.971Z] =================================================================================================================== 00:19:56.871 [2024-11-26T06:29:24.971Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.871 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 755011 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UzWW5fhBhk 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.UzWW5fhBhk 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UzWW5fhBhk 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UzWW5fhBhk 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=755133 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 755133 /var/tmp/bdevperf.sock 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 755133 ']' 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.130 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.130 [2024-11-26 07:29:25.063198] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:19:57.130 [2024-11-26 07:29:25.063247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755133 ] 00:19:57.130 [2024-11-26 07:29:25.121371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.130 [2024-11-26 07:29:25.160752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.389 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.389 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.389 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UzWW5fhBhk 00:19:57.389 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:57.648 [2024-11-26 07:29:25.619886] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.648 [2024-11-26 07:29:25.624518] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:57.648 [2024-11-26 07:29:25.624540] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:57.648 [2024-11-26 07:29:25.624564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:57.648 [2024-11-26 07:29:25.625246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ca190 (107): Transport endpoint is not connected 00:19:57.648 [2024-11-26 07:29:25.626238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ca190 (9): Bad file descriptor 00:19:57.648 [2024-11-26 07:29:25.627240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:57.648 [2024-11-26 07:29:25.627249] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:57.648 [2024-11-26 07:29:25.627257] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:57.648 [2024-11-26 07:29:25.627267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
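Here the key material itself is valid, but the target resolves PSKs by the TLS identity string "NVMe0R01 <hostnqn> <subnqn>", and nqn.2016-06.io.spdk:host2 was never registered against cnode1, so tcp_sock_get_key and posix_sock_psk_find_session_server_cb find nothing and the handshake is rejected; the JSON-RPC dump below confirms host2 in the request. Shown only for orientation, the host registration the test deliberately omits would look roughly like this on the target's default RPC socket (key0 is the key already in the target keyring):

    # Registering host2 with a PSK would make the identity
    # "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" resolvable.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host2 --psk key0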
00:19:57.648 request: 00:19:57.648 { 00:19:57.648 "name": "TLSTEST", 00:19:57.648 "trtype": "tcp", 00:19:57.648 "traddr": "10.0.0.2", 00:19:57.648 "adrfam": "ipv4", 00:19:57.648 "trsvcid": "4420", 00:19:57.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.648 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:57.648 "prchk_reftag": false, 00:19:57.648 "prchk_guard": false, 00:19:57.648 "hdgst": false, 00:19:57.648 "ddgst": false, 00:19:57.648 "psk": "key0", 00:19:57.648 "allow_unrecognized_csi": false, 00:19:57.648 "method": "bdev_nvme_attach_controller", 00:19:57.648 "req_id": 1 00:19:57.648 } 00:19:57.648 Got JSON-RPC error response 00:19:57.648 response: 00:19:57.648 { 00:19:57.648 "code": -5, 00:19:57.648 "message": "Input/output error" 00:19:57.648 } 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 755133 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 755133 ']' 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 755133 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755133 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755133' 00:19:57.648 killing process with pid 755133 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 755133 00:19:57.648 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.648 00:19:57.648 Latency(us) 00:19:57.648 [2024-11-26T06:29:25.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.648 [2024-11-26T06:29:25.748Z] =================================================================================================================== 00:19:57.648 [2024-11-26T06:29:25.748Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.648 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 755133 00:19:57.907 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:57.907 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:57.907 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.907 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.907 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UzWW5fhBhk 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.UzWW5fhBhk 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UzWW5fhBhk 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UzWW5fhBhk 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=755362 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 755362 /var/tmp/bdevperf.sock 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 755362 ']' 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.908 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.908 [2024-11-26 07:29:25.892084] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:19:57.908 [2024-11-26 07:29:25.892134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755362 ] 00:19:57.908 [2024-11-26 07:29:25.950439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.908 [2024-11-26 07:29:25.987612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.167 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.167 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.167 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UzWW5fhBhk 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.426 [2024-11-26 07:29:26.434911] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.426 [2024-11-26 07:29:26.445796] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:58.426 [2024-11-26 07:29:26.445815] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:58.426 [2024-11-26 07:29:26.445836] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:58.426 [2024-11-26 07:29:26.446347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b56190 (107): Transport endpoint is not connected 00:19:58.426 [2024-11-26 07:29:26.447340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b56190 (9): Bad file descriptor 00:19:58.426 [2024-11-26 07:29:26.448342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:58.426 [2024-11-26 07:29:26.448351] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:58.426 [2024-11-26 07:29:26.448357] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:58.426 [2024-11-26 07:29:26.448367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
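This case mirrors the previous one from the other direction: host1 is known, but the initiator asks for subsystem cnode2, which was never created on the target, so the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" has no PSK either, and the same -5 error follows in the dump below. All of these expected-failure cases are driven through the harness's NOT/valid_exec_arg wrapper; a much-simplified sketch of that pattern (an illustration only, not the actual autotest_common.sh helper):

    # Simplified expected-failure wrapper: succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Usage, as in target/tls.sh@153:
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UzWW5fhBhk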
00:19:58.426 request: 00:19:58.426 { 00:19:58.426 "name": "TLSTEST", 00:19:58.426 "trtype": "tcp", 00:19:58.426 "traddr": "10.0.0.2", 00:19:58.426 "adrfam": "ipv4", 00:19:58.426 "trsvcid": "4420", 00:19:58.426 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:58.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.426 "prchk_reftag": false, 00:19:58.426 "prchk_guard": false, 00:19:58.426 "hdgst": false, 00:19:58.426 "ddgst": false, 00:19:58.426 "psk": "key0", 00:19:58.426 "allow_unrecognized_csi": false, 00:19:58.426 "method": "bdev_nvme_attach_controller", 00:19:58.426 "req_id": 1 00:19:58.426 } 00:19:58.426 Got JSON-RPC error response 00:19:58.426 response: 00:19:58.426 { 00:19:58.426 "code": -5, 00:19:58.426 "message": "Input/output error" 00:19:58.426 } 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 755362 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 755362 ']' 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 755362 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755362 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755362' 00:19:58.426 killing process with pid 755362 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 755362 00:19:58.426 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.426 00:19:58.426 Latency(us) 00:19:58.426 [2024-11-26T06:29:26.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.426 [2024-11-26T06:29:26.526Z] =================================================================================================================== 00:19:58.426 [2024-11-26T06:29:26.526Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.426 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 755362 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.685 07:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=755379 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 755379 /var/tmp/bdevperf.sock 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 755379 ']' 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.685 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.685 [2024-11-26 07:29:26.700910] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:19:58.685 [2024-11-26 07:29:26.700978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755379 ] 00:19:58.685 [2024-11-26 07:29:26.759897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.944 [2024-11-26 07:29:26.799525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.944 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.944 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.944 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:59.202 [2024-11-26 07:29:27.069392] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:59.202 [2024-11-26 07:29:27.069425] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:59.202 request: 00:19:59.202 { 00:19:59.202 "name": "key0", 00:19:59.202 "path": "", 00:19:59.202 "method": "keyring_file_add_key", 00:19:59.202 "req_id": 1 00:19:59.202 } 00:19:59.202 Got JSON-RPC error response 00:19:59.202 response: 00:19:59.202 { 00:19:59.202 "code": -1, 00:19:59.202 "message": "Operation not permitted" 00:19:59.202 } 00:19:59.203 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.203 [2024-11-26 07:29:27.265996] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.203 [2024-11-26 07:29:27.266026] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:59.203 request: 00:19:59.203 { 00:19:59.203 "name": "TLSTEST", 00:19:59.203 "trtype": "tcp", 00:19:59.203 "traddr": "10.0.0.2", 00:19:59.203 "adrfam": "ipv4", 00:19:59.203 "trsvcid": "4420", 00:19:59.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.203 "prchk_reftag": false, 00:19:59.203 "prchk_guard": false, 00:19:59.203 "hdgst": false, 00:19:59.203 "ddgst": false, 00:19:59.203 "psk": "key0", 00:19:59.203 "allow_unrecognized_csi": false, 00:19:59.203 "method": "bdev_nvme_attach_controller", 00:19:59.203 "req_id": 1 00:19:59.203 } 00:19:59.203 Got JSON-RPC error response 00:19:59.203 response: 00:19:59.203 { 00:19:59.203 "code": -126, 00:19:59.203 "message": "Required key not available" 00:19:59.203 } 00:19:59.203 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 755379 00:19:59.203 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 755379 ']' 00:19:59.203 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 755379 00:19:59.203 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.203 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.203 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755379 
00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755379' 00:19:59.461 killing process with pid 755379 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 755379 00:19:59.461 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.461 00:19:59.461 Latency(us) 00:19:59.461 [2024-11-26T06:29:27.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.461 [2024-11-26T06:29:27.561Z] =================================================================================================================== 00:19:59.461 [2024-11-26T06:29:27.561Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 755379 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 750841 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 750841 ']' 00:19:59.461 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 750841 00:19:59.462 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.462 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.462 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 750841 00:19:59.462 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:59.462 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.462 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 750841' 00:19:59.462 killing process with pid 750841 00:19:59.462 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 750841 00:19:59.462 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 750841 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.XeW1M5Ac8P 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.XeW1M5Ac8P 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=755630 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 755630 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 755630 ']' 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.721 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.721 [2024-11-26 07:29:27.814190] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:19:59.721 [2024-11-26 07:29:27.814240] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.980 [2024-11-26 07:29:27.878472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.980 [2024-11-26 07:29:27.915783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.980 [2024-11-26 07:29:27.915818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:59.980 [2024-11-26 07:29:27.915826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.980 [2024-11-26 07:29:27.915831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.980 [2024-11-26 07:29:27.915836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.980 [2024-11-26 07:29:27.916404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.XeW1M5Ac8P 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XeW1M5Ac8P 00:19:59.980 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.239 [2024-11-26 07:29:28.219647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.239 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.498 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:00.757 [2024-11-26 07:29:28.600633] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.757 [2024-11-26 07:29:28.600860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.757 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:00.757 malloc0 00:20:00.757 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.015 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:01.274 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XeW1M5Ac8P 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XeW1M5Ac8P 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=755884 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 755884 /var/tmp/bdevperf.sock 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 755884 ']' 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.535 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.535 [2024-11-26 07:29:29.406172] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:20:01.536 [2024-11-26 07:29:29.406219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755884 ] 00:20:01.536 [2024-11-26 07:29:29.462620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.536 [2024-11-26 07:29:29.502542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.536 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.536 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.536 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:01.795 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:02.054 [2024-11-26 07:29:29.957095] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.054 TLSTESTn1 00:20:02.054 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:02.054 Running I/O for 10 seconds... 00:20:04.366 5304.00 IOPS, 20.72 MiB/s [2024-11-26T06:29:33.403Z] 5243.00 IOPS, 20.48 MiB/s [2024-11-26T06:29:34.339Z] 5334.33 IOPS, 20.84 MiB/s [2024-11-26T06:29:35.276Z] 5383.75 IOPS, 21.03 MiB/s [2024-11-26T06:29:36.212Z] 5393.00 IOPS, 21.07 MiB/s [2024-11-26T06:29:37.589Z] 5393.83 IOPS, 21.07 MiB/s [2024-11-26T06:29:38.523Z] 5393.43 IOPS, 21.07 MiB/s [2024-11-26T06:29:39.497Z] 5409.75 IOPS, 21.13 MiB/s [2024-11-26T06:29:40.431Z] 5381.22 IOPS, 21.02 MiB/s [2024-11-26T06:29:40.431Z] 5282.50 IOPS, 20.63 MiB/s 00:20:12.331 Latency(us) 00:20:12.331 [2024-11-26T06:29:40.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.331 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:12.331 Verification LBA range: start 0x0 length 0x2000 00:20:12.331 TLSTESTn1 : 10.03 5281.97 20.63 0.00 0.00 24190.65 6582.09 32824.99 00:20:12.331 [2024-11-26T06:29:40.431Z] =================================================================================================================== 00:20:12.331 [2024-11-26T06:29:40.431Z] Total : 5281.97 20.63 0.00 0.00 24190.65 6582.09 32824.99 00:20:12.331 { 00:20:12.331 "results": [ 00:20:12.331 { 00:20:12.331 "job": "TLSTESTn1", 00:20:12.331 "core_mask": "0x4", 00:20:12.331 "workload": "verify", 00:20:12.331 "status": "finished", 00:20:12.331 "verify_range": { 00:20:12.331 "start": 0, 00:20:12.331 "length": 8192 00:20:12.331 }, 00:20:12.331 "queue_depth": 128, 00:20:12.331 "io_size": 4096, 00:20:12.331 "runtime": 10.025043, 00:20:12.331 "iops": 5281.972356627298, 00:20:12.331 "mibps": 20.632704518075382, 00:20:12.331 "io_failed": 0, 00:20:12.331 "io_timeout": 0, 00:20:12.331 "avg_latency_us": 24190.65092874925, 00:20:12.331 "min_latency_us": 6582.093913043478, 00:20:12.331 "max_latency_us": 32824.98782608696 00:20:12.331 } 00:20:12.331 ], 00:20:12.331 
"core_count": 1 00:20:12.331 } 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 755884 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 755884 ']' 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 755884 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755884 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755884' 00:20:12.331 killing process with pid 755884 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 755884 00:20:12.331 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.331 00:20:12.331 Latency(us) 00:20:12.331 [2024-11-26T06:29:40.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.331 [2024-11-26T06:29:40.431Z] =================================================================================================================== 00:20:12.331 [2024-11-26T06:29:40.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 755884 00:20:12.331 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.XeW1M5Ac8P 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XeW1M5Ac8P 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XeW1M5Ac8P 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XeW1M5Ac8P 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:12.332 
07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XeW1M5Ac8P 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=757718 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 757718 /var/tmp/bdevperf.sock 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 757718 ']' 00:20:12.332 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.589 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.589 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.590 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.590 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.590 [2024-11-26 07:29:40.471822] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
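The xtrace lines above show how the harness brings up a second bdevperf instance for the negative test: the tool is started idle (-z) on its own JSON-RPC socket so the key and controller can be injected afterwards. A minimal sketch of that launch, reusing the exact flags from this run (the Jenkins workspace prefix is dropped, and the polling loop is only a simple stand-in for the harness's waitforlisten helper):

    # Start bdevperf parked (-z), core mask 0x4, with a private RPC socket;
    # queue depth, I/O size, workload and runtime match the flags seen above.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # Crude readiness check: poll the RPC socket until it answers.
    until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done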
00:20:12.590 [2024-11-26 07:29:40.471871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757718 ] 00:20:12.590 [2024-11-26 07:29:40.529898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.590 [2024-11-26 07:29:40.567575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.590 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.590 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.590 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:12.848 [2024-11-26 07:29:40.825668] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XeW1M5Ac8P': 0100666 00:20:12.848 [2024-11-26 07:29:40.825702] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:12.848 request: 00:20:12.848 { 00:20:12.848 "name": "key0", 00:20:12.848 "path": "/tmp/tmp.XeW1M5Ac8P", 00:20:12.848 "method": "keyring_file_add_key", 00:20:12.848 "req_id": 1 00:20:12.848 } 00:20:12.848 Got JSON-RPC error response 00:20:12.848 response: 00:20:12.848 { 00:20:12.848 "code": -1, 00:20:12.848 "message": "Operation not permitted" 00:20:12.848 } 00:20:12.848 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.106 [2024-11-26 07:29:41.014241] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.106 [2024-11-26 07:29:41.014270] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:13.106 request: 00:20:13.106 { 00:20:13.106 "name": "TLSTEST", 00:20:13.106 "trtype": "tcp", 00:20:13.106 "traddr": "10.0.0.2", 00:20:13.106 "adrfam": "ipv4", 00:20:13.106 "trsvcid": "4420", 00:20:13.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.106 "prchk_reftag": false, 00:20:13.106 "prchk_guard": false, 00:20:13.106 "hdgst": false, 00:20:13.106 "ddgst": false, 00:20:13.106 "psk": "key0", 00:20:13.106 "allow_unrecognized_csi": false, 00:20:13.106 "method": "bdev_nvme_attach_controller", 00:20:13.106 "req_id": 1 00:20:13.106 } 00:20:13.106 Got JSON-RPC error response 00:20:13.106 response: 00:20:13.106 { 00:20:13.106 "code": -126, 00:20:13.106 "message": "Required key not available" 00:20:13.106 } 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 757718 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 757718 ']' 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 757718 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 757718 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 757718' 00:20:13.106 killing process with pid 757718 00:20:13.106 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 757718 00:20:13.106 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.106 00:20:13.106 Latency(us) 00:20:13.106 [2024-11-26T06:29:41.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.107 [2024-11-26T06:29:41.207Z] =================================================================================================================== 00:20:13.107 [2024-11-26T06:29:41.207Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.107 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 757718 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 755630 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 755630 ']' 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 755630 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755630 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755630' 00:20:13.366 killing process with pid 755630 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 755630 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 755630 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.366 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=757959 
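Those failures are the point of this test case: target/tls.sh deliberately loosens the PSK file to 0666, keyring_file_add_key then rejects it ("Invalid permissions for key file ... 0100666", JSON-RPC error -1), and without a usable key0 the TLS attach fails with -126 "Required key not available". A condensed sketch of the check, with the temporary key path and addresses taken from this run:

    key=/tmp/tmp.XeW1M5Ac8P          # temporary PSK file created earlier in the test
    rpc=./scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    chmod 0666 "$key"                # group/world access makes the key unacceptable
    $rpc -s "$sock" keyring_file_add_key key0 "$key" \
        || echo "rejected as expected: Operation not permitted"

    # With no key0 in the keyring, the TLS controller attach cannot load the PSK.
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 \
        || echo "rejected as expected: Required key not available"

    chmod 0600 "$key"                # owner-only again, as tls.sh does before the next stage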
00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 757959 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 757959 ']' 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.625 [2024-11-26 07:29:41.515146] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:13.625 [2024-11-26 07:29:41.515193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.625 [2024-11-26 07:29:41.582431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.625 [2024-11-26 07:29:41.618601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.625 [2024-11-26 07:29:41.618635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.625 [2024-11-26 07:29:41.618646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.625 [2024-11-26 07:29:41.618652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.625 [2024-11-26 07:29:41.618673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
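The nvmf_tgt that is starting here is configured next by setup_nvmf_tgt: a TCP transport, a subsystem backed by a malloc namespace, and a listener opened with -k so it negotiates TLS. A sketch of that RPC sequence as it appears in this log (the ip netns wrapper used for the target process is omitted; at this particular stage the keyring step is the one that still fails because the key file is 0666, and the same sequence only succeeds after the permissions are restored):

    rpc=./scripts/rpc.py             # talks to the target's default /var/tmp/spdk.sock
    key=/tmp/tmp.XeW1M5Ac8P

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k               # -k: TLS-enabled listener
    $rpc bdev_malloc_create 32 4096 -b malloc0      # 32 MB namespace, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"           # requires the file to be 0600
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0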
00:20:13.625 [2024-11-26 07:29:41.619252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.625 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.XeW1M5Ac8P 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XeW1M5Ac8P 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.XeW1M5Ac8P 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XeW1M5Ac8P 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:13.884 [2024-11-26 07:29:41.914113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.884 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.142 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.401 [2024-11-26 07:29:42.287070] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.401 [2024-11-26 07:29:42.287282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.401 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.401 malloc0 00:20:14.659 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:14.659 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:14.918 [2024-11-26 
07:29:42.844546] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XeW1M5Ac8P': 0100666 00:20:14.918 [2024-11-26 07:29:42.844574] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:14.918 request: 00:20:14.918 { 00:20:14.918 "name": "key0", 00:20:14.918 "path": "/tmp/tmp.XeW1M5Ac8P", 00:20:14.918 "method": "keyring_file_add_key", 00:20:14.918 "req_id": 1 00:20:14.918 } 00:20:14.918 Got JSON-RPC error response 00:20:14.918 response: 00:20:14.918 { 00:20:14.918 "code": -1, 00:20:14.918 "message": "Operation not permitted" 00:20:14.918 } 00:20:14.918 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.177 [2024-11-26 07:29:43.021027] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:15.177 [2024-11-26 07:29:43.021057] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:15.177 request: 00:20:15.177 { 00:20:15.177 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.177 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.177 "psk": "key0", 00:20:15.177 "method": "nvmf_subsystem_add_host", 00:20:15.177 "req_id": 1 00:20:15.177 } 00:20:15.177 Got JSON-RPC error response 00:20:15.177 response: 00:20:15.177 { 00:20:15.177 "code": -32603, 00:20:15.177 "message": "Internal error" 00:20:15.177 } 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 757959 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 757959 ']' 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 757959 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 757959 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 757959' 00:20:15.177 killing process with pid 757959 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 757959 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 757959 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.XeW1M5Ac8P 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.177 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=758226 00:20:15.178 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:15.178 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 758226 00:20:15.178 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 758226 ']' 00:20:15.178 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.178 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.178 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.178 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.178 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.436 [2024-11-26 07:29:43.321195] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:15.436 [2024-11-26 07:29:43.321243] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.436 [2024-11-26 07:29:43.388524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.436 [2024-11-26 07:29:43.425320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.436 [2024-11-26 07:29:43.425355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.436 [2024-11-26 07:29:43.425363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.436 [2024-11-26 07:29:43.425369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.436 [2024-11-26 07:29:43.425374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
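After this restart the key file is back to 0600, so the same target setup succeeds and the host side mirrors the successful pass at the top of the section: bdevperf registers the PSK in its own keyring, attaches a TLS controller to the listener, and in the earlier pass bdevperf.py perform_tests drove the parked verify workload. A sketch of those host-side calls, again with paths and addresses from this run:

    rpc=./scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # As in the earlier pass of this test: trigger the parked verify job over RPC.
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests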
00:20:15.436 [2024-11-26 07:29:43.425979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.436 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.436 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:15.436 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.436 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:15.436 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.695 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.695 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.XeW1M5Ac8P 00:20:15.695 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XeW1M5Ac8P 00:20:15.695 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:15.695 [2024-11-26 07:29:43.717210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.695 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:15.954 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:16.213 [2024-11-26 07:29:44.098192] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.213 [2024-11-26 07:29:44.098394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.213 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:16.213 malloc0 00:20:16.214 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:16.471 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:16.729 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=758481 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 758481 /var/tmp/bdevperf.sock 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 758481 ']' 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.988 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.988 [2024-11-26 07:29:44.883613] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:16.988 [2024-11-26 07:29:44.883663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758481 ] 00:20:16.988 [2024-11-26 07:29:44.942275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.988 [2024-11-26 07:29:44.982623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.988 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.988 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.988 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:17.246 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:17.504 [2024-11-26 07:29:45.441315] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.504 TLSTESTn1 00:20:17.504 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:17.762 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:17.762 "subsystems": [ 00:20:17.762 { 00:20:17.762 "subsystem": "keyring", 00:20:17.762 "config": [ 00:20:17.762 { 00:20:17.762 "method": "keyring_file_add_key", 00:20:17.762 "params": { 00:20:17.762 "name": "key0", 00:20:17.762 "path": "/tmp/tmp.XeW1M5Ac8P" 00:20:17.762 } 00:20:17.762 } 00:20:17.762 ] 00:20:17.762 }, 00:20:17.762 { 00:20:17.763 "subsystem": "iobuf", 00:20:17.763 "config": [ 00:20:17.763 { 00:20:17.763 "method": "iobuf_set_options", 00:20:17.763 "params": { 00:20:17.763 "small_pool_count": 8192, 00:20:17.763 "large_pool_count": 1024, 00:20:17.763 "small_bufsize": 8192, 00:20:17.763 "large_bufsize": 135168, 00:20:17.763 "enable_numa": false 00:20:17.763 } 00:20:17.763 } 00:20:17.763 ] 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "subsystem": "sock", 00:20:17.763 "config": [ 00:20:17.763 { 00:20:17.763 "method": "sock_set_default_impl", 00:20:17.763 "params": { 00:20:17.763 "impl_name": "posix" 
00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "sock_impl_set_options", 00:20:17.763 "params": { 00:20:17.763 "impl_name": "ssl", 00:20:17.763 "recv_buf_size": 4096, 00:20:17.763 "send_buf_size": 4096, 00:20:17.763 "enable_recv_pipe": true, 00:20:17.763 "enable_quickack": false, 00:20:17.763 "enable_placement_id": 0, 00:20:17.763 "enable_zerocopy_send_server": true, 00:20:17.763 "enable_zerocopy_send_client": false, 00:20:17.763 "zerocopy_threshold": 0, 00:20:17.763 "tls_version": 0, 00:20:17.763 "enable_ktls": false 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "sock_impl_set_options", 00:20:17.763 "params": { 00:20:17.763 "impl_name": "posix", 00:20:17.763 "recv_buf_size": 2097152, 00:20:17.763 "send_buf_size": 2097152, 00:20:17.763 "enable_recv_pipe": true, 00:20:17.763 "enable_quickack": false, 00:20:17.763 "enable_placement_id": 0, 00:20:17.763 "enable_zerocopy_send_server": true, 00:20:17.763 "enable_zerocopy_send_client": false, 00:20:17.763 "zerocopy_threshold": 0, 00:20:17.763 "tls_version": 0, 00:20:17.763 "enable_ktls": false 00:20:17.763 } 00:20:17.763 } 00:20:17.763 ] 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "subsystem": "vmd", 00:20:17.763 "config": [] 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "subsystem": "accel", 00:20:17.763 "config": [ 00:20:17.763 { 00:20:17.763 "method": "accel_set_options", 00:20:17.763 "params": { 00:20:17.763 "small_cache_size": 128, 00:20:17.763 "large_cache_size": 16, 00:20:17.763 "task_count": 2048, 00:20:17.763 "sequence_count": 2048, 00:20:17.763 "buf_count": 2048 00:20:17.763 } 00:20:17.763 } 00:20:17.763 ] 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "subsystem": "bdev", 00:20:17.763 "config": [ 00:20:17.763 { 00:20:17.763 "method": "bdev_set_options", 00:20:17.763 "params": { 00:20:17.763 "bdev_io_pool_size": 65535, 00:20:17.763 "bdev_io_cache_size": 256, 00:20:17.763 "bdev_auto_examine": true, 00:20:17.763 "iobuf_small_cache_size": 128, 00:20:17.763 "iobuf_large_cache_size": 16 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "bdev_raid_set_options", 00:20:17.763 "params": { 00:20:17.763 "process_window_size_kb": 1024, 00:20:17.763 "process_max_bandwidth_mb_sec": 0 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "bdev_iscsi_set_options", 00:20:17.763 "params": { 00:20:17.763 "timeout_sec": 30 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "bdev_nvme_set_options", 00:20:17.763 "params": { 00:20:17.763 "action_on_timeout": "none", 00:20:17.763 "timeout_us": 0, 00:20:17.763 "timeout_admin_us": 0, 00:20:17.763 "keep_alive_timeout_ms": 10000, 00:20:17.763 "arbitration_burst": 0, 00:20:17.763 "low_priority_weight": 0, 00:20:17.763 "medium_priority_weight": 0, 00:20:17.763 "high_priority_weight": 0, 00:20:17.763 "nvme_adminq_poll_period_us": 10000, 00:20:17.763 "nvme_ioq_poll_period_us": 0, 00:20:17.763 "io_queue_requests": 0, 00:20:17.763 "delay_cmd_submit": true, 00:20:17.763 "transport_retry_count": 4, 00:20:17.763 "bdev_retry_count": 3, 00:20:17.763 "transport_ack_timeout": 0, 00:20:17.763 "ctrlr_loss_timeout_sec": 0, 00:20:17.763 "reconnect_delay_sec": 0, 00:20:17.763 "fast_io_fail_timeout_sec": 0, 00:20:17.763 "disable_auto_failback": false, 00:20:17.763 "generate_uuids": false, 00:20:17.763 "transport_tos": 0, 00:20:17.763 "nvme_error_stat": false, 00:20:17.763 "rdma_srq_size": 0, 00:20:17.763 "io_path_stat": false, 00:20:17.763 "allow_accel_sequence": false, 00:20:17.763 "rdma_max_cq_size": 0, 00:20:17.763 
"rdma_cm_event_timeout_ms": 0, 00:20:17.763 "dhchap_digests": [ 00:20:17.763 "sha256", 00:20:17.763 "sha384", 00:20:17.763 "sha512" 00:20:17.763 ], 00:20:17.763 "dhchap_dhgroups": [ 00:20:17.763 "null", 00:20:17.763 "ffdhe2048", 00:20:17.763 "ffdhe3072", 00:20:17.763 "ffdhe4096", 00:20:17.763 "ffdhe6144", 00:20:17.763 "ffdhe8192" 00:20:17.763 ] 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "bdev_nvme_set_hotplug", 00:20:17.763 "params": { 00:20:17.763 "period_us": 100000, 00:20:17.763 "enable": false 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "bdev_malloc_create", 00:20:17.763 "params": { 00:20:17.763 "name": "malloc0", 00:20:17.763 "num_blocks": 8192, 00:20:17.763 "block_size": 4096, 00:20:17.763 "physical_block_size": 4096, 00:20:17.763 "uuid": "11ea3290-2720-415f-8eb5-b29c7cf15974", 00:20:17.763 "optimal_io_boundary": 0, 00:20:17.763 "md_size": 0, 00:20:17.763 "dif_type": 0, 00:20:17.763 "dif_is_head_of_md": false, 00:20:17.763 "dif_pi_format": 0 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "bdev_wait_for_examine" 00:20:17.763 } 00:20:17.763 ] 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "subsystem": "nbd", 00:20:17.763 "config": [] 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "subsystem": "scheduler", 00:20:17.763 "config": [ 00:20:17.763 { 00:20:17.763 "method": "framework_set_scheduler", 00:20:17.763 "params": { 00:20:17.763 "name": "static" 00:20:17.763 } 00:20:17.763 } 00:20:17.763 ] 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "subsystem": "nvmf", 00:20:17.763 "config": [ 00:20:17.763 { 00:20:17.763 "method": "nvmf_set_config", 00:20:17.763 "params": { 00:20:17.763 "discovery_filter": "match_any", 00:20:17.763 "admin_cmd_passthru": { 00:20:17.763 "identify_ctrlr": false 00:20:17.763 }, 00:20:17.763 "dhchap_digests": [ 00:20:17.763 "sha256", 00:20:17.763 "sha384", 00:20:17.763 "sha512" 00:20:17.763 ], 00:20:17.763 "dhchap_dhgroups": [ 00:20:17.763 "null", 00:20:17.763 "ffdhe2048", 00:20:17.763 "ffdhe3072", 00:20:17.763 "ffdhe4096", 00:20:17.763 "ffdhe6144", 00:20:17.763 "ffdhe8192" 00:20:17.763 ] 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "nvmf_set_max_subsystems", 00:20:17.763 "params": { 00:20:17.763 "max_subsystems": 1024 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "nvmf_set_crdt", 00:20:17.763 "params": { 00:20:17.763 "crdt1": 0, 00:20:17.763 "crdt2": 0, 00:20:17.763 "crdt3": 0 00:20:17.763 } 00:20:17.763 }, 00:20:17.763 { 00:20:17.763 "method": "nvmf_create_transport", 00:20:17.763 "params": { 00:20:17.763 "trtype": "TCP", 00:20:17.763 "max_queue_depth": 128, 00:20:17.763 "max_io_qpairs_per_ctrlr": 127, 00:20:17.763 "in_capsule_data_size": 4096, 00:20:17.763 "max_io_size": 131072, 00:20:17.763 "io_unit_size": 131072, 00:20:17.763 "max_aq_depth": 128, 00:20:17.763 "num_shared_buffers": 511, 00:20:17.763 "buf_cache_size": 4294967295, 00:20:17.763 "dif_insert_or_strip": false, 00:20:17.763 "zcopy": false, 00:20:17.763 "c2h_success": false, 00:20:17.763 "sock_priority": 0, 00:20:17.764 "abort_timeout_sec": 1, 00:20:17.764 "ack_timeout": 0, 00:20:17.764 "data_wr_pool_size": 0 00:20:17.764 } 00:20:17.764 }, 00:20:17.764 { 00:20:17.764 "method": "nvmf_create_subsystem", 00:20:17.764 "params": { 00:20:17.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.764 "allow_any_host": false, 00:20:17.764 "serial_number": "SPDK00000000000001", 00:20:17.764 "model_number": "SPDK bdev Controller", 00:20:17.764 "max_namespaces": 10, 00:20:17.764 "min_cntlid": 1, 00:20:17.764 
"max_cntlid": 65519, 00:20:17.764 "ana_reporting": false 00:20:17.764 } 00:20:17.764 }, 00:20:17.764 { 00:20:17.764 "method": "nvmf_subsystem_add_host", 00:20:17.764 "params": { 00:20:17.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.764 "host": "nqn.2016-06.io.spdk:host1", 00:20:17.764 "psk": "key0" 00:20:17.764 } 00:20:17.764 }, 00:20:17.764 { 00:20:17.764 "method": "nvmf_subsystem_add_ns", 00:20:17.764 "params": { 00:20:17.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.764 "namespace": { 00:20:17.764 "nsid": 1, 00:20:17.764 "bdev_name": "malloc0", 00:20:17.764 "nguid": "11EA32902720415F8EB5B29C7CF15974", 00:20:17.764 "uuid": "11ea3290-2720-415f-8eb5-b29c7cf15974", 00:20:17.764 "no_auto_visible": false 00:20:17.764 } 00:20:17.764 } 00:20:17.764 }, 00:20:17.764 { 00:20:17.764 "method": "nvmf_subsystem_add_listener", 00:20:17.764 "params": { 00:20:17.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.764 "listen_address": { 00:20:17.764 "trtype": "TCP", 00:20:17.764 "adrfam": "IPv4", 00:20:17.764 "traddr": "10.0.0.2", 00:20:17.764 "trsvcid": "4420" 00:20:17.764 }, 00:20:17.764 "secure_channel": true 00:20:17.764 } 00:20:17.764 } 00:20:17.764 ] 00:20:17.764 } 00:20:17.764 ] 00:20:17.764 }' 00:20:17.764 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:18.022 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:18.022 "subsystems": [ 00:20:18.022 { 00:20:18.022 "subsystem": "keyring", 00:20:18.022 "config": [ 00:20:18.022 { 00:20:18.022 "method": "keyring_file_add_key", 00:20:18.022 "params": { 00:20:18.022 "name": "key0", 00:20:18.022 "path": "/tmp/tmp.XeW1M5Ac8P" 00:20:18.022 } 00:20:18.023 } 00:20:18.023 ] 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "subsystem": "iobuf", 00:20:18.023 "config": [ 00:20:18.023 { 00:20:18.023 "method": "iobuf_set_options", 00:20:18.023 "params": { 00:20:18.023 "small_pool_count": 8192, 00:20:18.023 "large_pool_count": 1024, 00:20:18.023 "small_bufsize": 8192, 00:20:18.023 "large_bufsize": 135168, 00:20:18.023 "enable_numa": false 00:20:18.023 } 00:20:18.023 } 00:20:18.023 ] 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "subsystem": "sock", 00:20:18.023 "config": [ 00:20:18.023 { 00:20:18.023 "method": "sock_set_default_impl", 00:20:18.023 "params": { 00:20:18.023 "impl_name": "posix" 00:20:18.023 } 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "method": "sock_impl_set_options", 00:20:18.023 "params": { 00:20:18.023 "impl_name": "ssl", 00:20:18.023 "recv_buf_size": 4096, 00:20:18.023 "send_buf_size": 4096, 00:20:18.023 "enable_recv_pipe": true, 00:20:18.023 "enable_quickack": false, 00:20:18.023 "enable_placement_id": 0, 00:20:18.023 "enable_zerocopy_send_server": true, 00:20:18.023 "enable_zerocopy_send_client": false, 00:20:18.023 "zerocopy_threshold": 0, 00:20:18.023 "tls_version": 0, 00:20:18.023 "enable_ktls": false 00:20:18.023 } 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "method": "sock_impl_set_options", 00:20:18.023 "params": { 00:20:18.023 "impl_name": "posix", 00:20:18.023 "recv_buf_size": 2097152, 00:20:18.023 "send_buf_size": 2097152, 00:20:18.023 "enable_recv_pipe": true, 00:20:18.023 "enable_quickack": false, 00:20:18.023 "enable_placement_id": 0, 00:20:18.023 "enable_zerocopy_send_server": true, 00:20:18.023 "enable_zerocopy_send_client": false, 00:20:18.023 "zerocopy_threshold": 0, 00:20:18.023 "tls_version": 0, 00:20:18.023 "enable_ktls": false 00:20:18.023 } 00:20:18.023 
} 00:20:18.023 ] 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "subsystem": "vmd", 00:20:18.023 "config": [] 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "subsystem": "accel", 00:20:18.023 "config": [ 00:20:18.023 { 00:20:18.023 "method": "accel_set_options", 00:20:18.023 "params": { 00:20:18.023 "small_cache_size": 128, 00:20:18.023 "large_cache_size": 16, 00:20:18.023 "task_count": 2048, 00:20:18.023 "sequence_count": 2048, 00:20:18.023 "buf_count": 2048 00:20:18.023 } 00:20:18.023 } 00:20:18.023 ] 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "subsystem": "bdev", 00:20:18.023 "config": [ 00:20:18.023 { 00:20:18.023 "method": "bdev_set_options", 00:20:18.023 "params": { 00:20:18.023 "bdev_io_pool_size": 65535, 00:20:18.023 "bdev_io_cache_size": 256, 00:20:18.023 "bdev_auto_examine": true, 00:20:18.023 "iobuf_small_cache_size": 128, 00:20:18.023 "iobuf_large_cache_size": 16 00:20:18.023 } 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "method": "bdev_raid_set_options", 00:20:18.023 "params": { 00:20:18.023 "process_window_size_kb": 1024, 00:20:18.023 "process_max_bandwidth_mb_sec": 0 00:20:18.023 } 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "method": "bdev_iscsi_set_options", 00:20:18.023 "params": { 00:20:18.023 "timeout_sec": 30 00:20:18.023 } 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "method": "bdev_nvme_set_options", 00:20:18.023 "params": { 00:20:18.023 "action_on_timeout": "none", 00:20:18.023 "timeout_us": 0, 00:20:18.023 "timeout_admin_us": 0, 00:20:18.023 "keep_alive_timeout_ms": 10000, 00:20:18.023 "arbitration_burst": 0, 00:20:18.023 "low_priority_weight": 0, 00:20:18.023 "medium_priority_weight": 0, 00:20:18.023 "high_priority_weight": 0, 00:20:18.023 "nvme_adminq_poll_period_us": 10000, 00:20:18.023 "nvme_ioq_poll_period_us": 0, 00:20:18.023 "io_queue_requests": 512, 00:20:18.023 "delay_cmd_submit": true, 00:20:18.023 "transport_retry_count": 4, 00:20:18.023 "bdev_retry_count": 3, 00:20:18.023 "transport_ack_timeout": 0, 00:20:18.023 "ctrlr_loss_timeout_sec": 0, 00:20:18.023 "reconnect_delay_sec": 0, 00:20:18.023 "fast_io_fail_timeout_sec": 0, 00:20:18.023 "disable_auto_failback": false, 00:20:18.023 "generate_uuids": false, 00:20:18.023 "transport_tos": 0, 00:20:18.023 "nvme_error_stat": false, 00:20:18.023 "rdma_srq_size": 0, 00:20:18.023 "io_path_stat": false, 00:20:18.023 "allow_accel_sequence": false, 00:20:18.023 "rdma_max_cq_size": 0, 00:20:18.023 "rdma_cm_event_timeout_ms": 0, 00:20:18.023 "dhchap_digests": [ 00:20:18.023 "sha256", 00:20:18.023 "sha384", 00:20:18.023 "sha512" 00:20:18.023 ], 00:20:18.023 "dhchap_dhgroups": [ 00:20:18.023 "null", 00:20:18.023 "ffdhe2048", 00:20:18.023 "ffdhe3072", 00:20:18.023 "ffdhe4096", 00:20:18.023 "ffdhe6144", 00:20:18.023 "ffdhe8192" 00:20:18.023 ] 00:20:18.023 } 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "method": "bdev_nvme_attach_controller", 00:20:18.023 "params": { 00:20:18.023 "name": "TLSTEST", 00:20:18.023 "trtype": "TCP", 00:20:18.023 "adrfam": "IPv4", 00:20:18.023 "traddr": "10.0.0.2", 00:20:18.023 "trsvcid": "4420", 00:20:18.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.023 "prchk_reftag": false, 00:20:18.023 "prchk_guard": false, 00:20:18.023 "ctrlr_loss_timeout_sec": 0, 00:20:18.023 "reconnect_delay_sec": 0, 00:20:18.023 "fast_io_fail_timeout_sec": 0, 00:20:18.023 "psk": "key0", 00:20:18.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.023 "hdgst": false, 00:20:18.023 "ddgst": false, 00:20:18.023 "multipath": "multipath" 00:20:18.023 } 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "method": 
"bdev_nvme_set_hotplug", 00:20:18.023 "params": { 00:20:18.023 "period_us": 100000, 00:20:18.023 "enable": false 00:20:18.023 } 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "method": "bdev_wait_for_examine" 00:20:18.023 } 00:20:18.023 ] 00:20:18.023 }, 00:20:18.023 { 00:20:18.023 "subsystem": "nbd", 00:20:18.023 "config": [] 00:20:18.023 } 00:20:18.023 ] 00:20:18.023 }' 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 758481 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 758481 ']' 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 758481 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 758481 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 758481' 00:20:18.023 killing process with pid 758481 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 758481 00:20:18.023 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.023 00:20:18.023 Latency(us) 00:20:18.023 [2024-11-26T06:29:46.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.023 [2024-11-26T06:29:46.123Z] =================================================================================================================== 00:20:18.023 [2024-11-26T06:29:46.123Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.023 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 758481 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 758226 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 758226 ']' 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 758226 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 758226 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 758226' 00:20:18.283 killing process with pid 758226 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 758226 00:20:18.283 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 758226 00:20:18.542 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:18.542 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:18.542 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.542 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:18.542 "subsystems": [ 00:20:18.542 { 00:20:18.542 "subsystem": "keyring", 00:20:18.542 "config": [ 00:20:18.542 { 00:20:18.542 "method": "keyring_file_add_key", 00:20:18.542 "params": { 00:20:18.542 "name": "key0", 00:20:18.542 "path": "/tmp/tmp.XeW1M5Ac8P" 00:20:18.542 } 00:20:18.542 } 00:20:18.542 ] 00:20:18.542 }, 00:20:18.542 { 00:20:18.542 "subsystem": "iobuf", 00:20:18.542 "config": [ 00:20:18.542 { 00:20:18.542 "method": "iobuf_set_options", 00:20:18.542 "params": { 00:20:18.542 "small_pool_count": 8192, 00:20:18.542 "large_pool_count": 1024, 00:20:18.542 "small_bufsize": 8192, 00:20:18.542 "large_bufsize": 135168, 00:20:18.542 "enable_numa": false 00:20:18.542 } 00:20:18.542 } 00:20:18.542 ] 00:20:18.542 }, 00:20:18.542 { 00:20:18.542 "subsystem": "sock", 00:20:18.542 "config": [ 00:20:18.542 { 00:20:18.542 "method": "sock_set_default_impl", 00:20:18.542 "params": { 00:20:18.542 "impl_name": "posix" 00:20:18.542 } 00:20:18.542 }, 00:20:18.542 { 00:20:18.542 "method": "sock_impl_set_options", 00:20:18.542 "params": { 00:20:18.542 "impl_name": "ssl", 00:20:18.542 "recv_buf_size": 4096, 00:20:18.542 "send_buf_size": 4096, 00:20:18.542 "enable_recv_pipe": true, 00:20:18.542 "enable_quickack": false, 00:20:18.542 "enable_placement_id": 0, 00:20:18.542 "enable_zerocopy_send_server": true, 00:20:18.542 "enable_zerocopy_send_client": false, 00:20:18.542 "zerocopy_threshold": 0, 00:20:18.542 "tls_version": 0, 00:20:18.542 "enable_ktls": false 00:20:18.542 } 00:20:18.542 }, 00:20:18.542 { 00:20:18.542 "method": "sock_impl_set_options", 00:20:18.542 "params": { 00:20:18.542 "impl_name": "posix", 00:20:18.542 "recv_buf_size": 2097152, 00:20:18.542 "send_buf_size": 2097152, 00:20:18.542 "enable_recv_pipe": true, 00:20:18.542 "enable_quickack": false, 00:20:18.542 "enable_placement_id": 0, 00:20:18.542 "enable_zerocopy_send_server": true, 00:20:18.542 "enable_zerocopy_send_client": false, 00:20:18.542 "zerocopy_threshold": 0, 00:20:18.542 "tls_version": 0, 00:20:18.542 "enable_ktls": false 00:20:18.542 } 00:20:18.542 } 00:20:18.542 ] 00:20:18.542 }, 00:20:18.542 { 00:20:18.542 "subsystem": "vmd", 00:20:18.542 "config": [] 00:20:18.542 }, 00:20:18.542 { 00:20:18.542 "subsystem": "accel", 00:20:18.542 "config": [ 00:20:18.542 { 00:20:18.542 "method": "accel_set_options", 00:20:18.542 "params": { 00:20:18.542 "small_cache_size": 128, 00:20:18.542 "large_cache_size": 16, 00:20:18.542 "task_count": 2048, 00:20:18.542 "sequence_count": 2048, 00:20:18.542 "buf_count": 2048 00:20:18.542 } 00:20:18.542 } 00:20:18.542 ] 00:20:18.542 }, 00:20:18.542 { 00:20:18.542 "subsystem": "bdev", 00:20:18.542 "config": [ 00:20:18.542 { 00:20:18.542 "method": "bdev_set_options", 00:20:18.542 "params": { 00:20:18.542 "bdev_io_pool_size": 65535, 00:20:18.542 "bdev_io_cache_size": 256, 00:20:18.542 "bdev_auto_examine": true, 00:20:18.542 "iobuf_small_cache_size": 128, 00:20:18.542 "iobuf_large_cache_size": 16 00:20:18.542 } 00:20:18.542 }, 00:20:18.543 { 00:20:18.543 "method": "bdev_raid_set_options", 00:20:18.543 "params": { 00:20:18.543 "process_window_size_kb": 1024, 00:20:18.543 "process_max_bandwidth_mb_sec": 0 00:20:18.543 } 00:20:18.543 }, 
00:20:18.543 { 00:20:18.543 "method": "bdev_iscsi_set_options", 00:20:18.543 "params": { 00:20:18.543 "timeout_sec": 30 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "bdev_nvme_set_options", 00:20:18.543 "params": { 00:20:18.543 "action_on_timeout": "none", 00:20:18.543 "timeout_us": 0, 00:20:18.543 "timeout_admin_us": 0, 00:20:18.543 "keep_alive_timeout_ms": 10000, 00:20:18.543 "arbitration_burst": 0, 00:20:18.543 "low_priority_weight": 0, 00:20:18.543 "medium_priority_weight": 0, 00:20:18.543 "high_priority_weight": 0, 00:20:18.543 "nvme_adminq_poll_period_us": 10000, 00:20:18.543 "nvme_ioq_poll_period_us": 0, 00:20:18.543 "io_queue_requests": 0, 00:20:18.543 "delay_cmd_submit": true, 00:20:18.543 "transport_retry_count": 4, 00:20:18.543 "bdev_retry_count": 3, 00:20:18.543 "transport_ack_timeout": 0, 00:20:18.543 "ctrlr_loss_timeout_sec": 0, 00:20:18.543 "reconnect_delay_sec": 0, 00:20:18.543 "fast_io_fail_timeout_sec": 0, 00:20:18.543 "disable_auto_failback": false, 00:20:18.543 "generate_uuids": false, 00:20:18.543 "transport_tos": 0, 00:20:18.543 "nvme_error_stat": false, 00:20:18.543 "rdma_srq_size": 0, 00:20:18.543 "io_path_stat": false, 00:20:18.543 "allow_accel_sequence": false, 00:20:18.543 "rdma_max_cq_size": 0, 00:20:18.543 "rdma_cm_event_timeout_ms": 0, 00:20:18.543 "dhchap_digests": [ 00:20:18.543 "sha256", 00:20:18.543 "sha384", 00:20:18.543 "sha512" 00:20:18.543 ], 00:20:18.543 "dhchap_dhgroups": [ 00:20:18.543 "null", 00:20:18.543 "ffdhe2048", 00:20:18.543 "ffdhe3072", 00:20:18.543 "ffdhe4096", 00:20:18.543 "ffdhe6144", 00:20:18.543 "ffdhe8192" 00:20:18.543 ] 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "bdev_nvme_set_hotplug", 00:20:18.543 "params": { 00:20:18.543 "period_us": 100000, 00:20:18.543 "enable": false 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "bdev_malloc_create", 00:20:18.543 "params": { 00:20:18.543 "name": "malloc0", 00:20:18.543 "num_blocks": 8192, 00:20:18.543 "block_size": 4096, 00:20:18.543 "physical_block_size": 4096, 00:20:18.543 "uuid": "11ea3290-2720-415f-8eb5-b29c7cf15974", 00:20:18.543 "optimal_io_boundary": 0, 00:20:18.543 "md_size": 0, 00:20:18.543 "dif_type": 0, 00:20:18.543 "dif_is_head_of_md": false, 00:20:18.543 "dif_pi_format": 0 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "bdev_wait_for_examine" 00:20:18.543 } 00:20:18.543 ] 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "subsystem": "nbd", 00:20:18.543 "config": [] 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "subsystem": "scheduler", 00:20:18.543 "config": [ 00:20:18.543 { 00:20:18.543 "method": "framework_set_scheduler", 00:20:18.543 "params": { 00:20:18.543 "name": "static" 00:20:18.543 } 00:20:18.543 } 00:20:18.543 ] 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "subsystem": "nvmf", 00:20:18.543 "config": [ 00:20:18.543 { 00:20:18.543 "method": "nvmf_set_config", 00:20:18.543 "params": { 00:20:18.543 "discovery_filter": "match_any", 00:20:18.543 "admin_cmd_passthru": { 00:20:18.543 "identify_ctrlr": false 00:20:18.543 }, 00:20:18.543 "dhchap_digests": [ 00:20:18.543 "sha256", 00:20:18.543 "sha384", 00:20:18.543 "sha512" 00:20:18.543 ], 00:20:18.543 "dhchap_dhgroups": [ 00:20:18.543 "null", 00:20:18.543 "ffdhe2048", 00:20:18.543 "ffdhe3072", 00:20:18.543 "ffdhe4096", 00:20:18.543 "ffdhe6144", 00:20:18.543 "ffdhe8192" 00:20:18.543 ] 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "nvmf_set_max_subsystems", 00:20:18.543 "params": { 00:20:18.543 "max_subsystems": 1024 
00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "nvmf_set_crdt", 00:20:18.543 "params": { 00:20:18.543 "crdt1": 0, 00:20:18.543 "crdt2": 0, 00:20:18.543 "crdt3": 0 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "nvmf_create_transport", 00:20:18.543 "params": { 00:20:18.543 "trtype": "TCP", 00:20:18.543 "max_queue_depth": 128, 00:20:18.543 "max_io_qpairs_per_ctrlr": 127, 00:20:18.543 "in_capsule_data_size": 4096, 00:20:18.543 "max_io_size": 131072, 00:20:18.543 "io_unit_size": 131072, 00:20:18.543 "max_aq_depth": 128, 00:20:18.543 "num_shared_buffers": 511, 00:20:18.543 "buf_cache_size": 4294967295, 00:20:18.543 "dif_insert_or_strip": false, 00:20:18.543 "zcopy": false, 00:20:18.543 "c2h_success": false, 00:20:18.543 "sock_priority": 0, 00:20:18.543 "abort_timeout_sec": 1, 00:20:18.543 "ack_timeout": 0, 00:20:18.543 "data_wr_pool_size": 0 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "nvmf_create_subsystem", 00:20:18.543 "params": { 00:20:18.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.543 "allow_any_host": false, 00:20:18.543 "serial_number": "SPDK00000000000001", 00:20:18.543 "model_number": "SPDK bdev Controller", 00:20:18.543 "max_namespaces": 10, 00:20:18.543 "min_cntlid": 1, 00:20:18.543 "max_cntlid": 65519, 00:20:18.543 "ana_reporting": false 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "nvmf_subsystem_add_host", 00:20:18.543 "params": { 00:20:18.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.543 "host": "nqn.2016-06.io.spdk:host1", 00:20:18.543 "psk": "key0" 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "nvmf_subsystem_add_ns", 00:20:18.543 "params": { 00:20:18.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.543 "namespace": { 00:20:18.543 "nsid": 1, 00:20:18.543 "bdev_name": "malloc0", 00:20:18.543 "nguid": "11EA32902720415F8EB5B29C7CF15974", 00:20:18.543 "uuid": "11ea3290-2720-415f-8eb5-b29c7cf15974", 00:20:18.543 "no_auto_visible": false 00:20:18.543 } 00:20:18.543 } 00:20:18.543 }, 00:20:18.543 { 00:20:18.543 "method": "nvmf_subsystem_add_listener", 00:20:18.543 "params": { 00:20:18.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.543 "listen_address": { 00:20:18.543 "trtype": "TCP", 00:20:18.543 "adrfam": "IPv4", 00:20:18.543 "traddr": "10.0.0.2", 00:20:18.543 "trsvcid": "4420" 00:20:18.543 }, 00:20:18.543 "secure_channel": true 00:20:18.543 } 00:20:18.543 } 00:20:18.543 ] 00:20:18.543 } 00:20:18.543 ] 00:20:18.543 }' 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=758734 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 758734 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 758734 ']' 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:18.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.543 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.543 [2024-11-26 07:29:46.539098] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:18.544 [2024-11-26 07:29:46.539145] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.544 [2024-11-26 07:29:46.604927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.802 [2024-11-26 07:29:46.650345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.802 [2024-11-26 07:29:46.650379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.802 [2024-11-26 07:29:46.650386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.802 [2024-11-26 07:29:46.650392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.802 [2024-11-26 07:29:46.650397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.802 [2024-11-26 07:29:46.650983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.802 [2024-11-26 07:29:46.864532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.802 [2024-11-26 07:29:46.896564] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.802 [2024-11-26 07:29:46.896768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=758977 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 758977 /var/tmp/bdevperf.sock 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 758977 ']' 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.369 07:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.369 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:19.369 "subsystems": [ 00:20:19.369 { 00:20:19.369 "subsystem": "keyring", 00:20:19.369 "config": [ 00:20:19.369 { 00:20:19.369 "method": "keyring_file_add_key", 00:20:19.369 "params": { 00:20:19.369 "name": "key0", 00:20:19.369 "path": "/tmp/tmp.XeW1M5Ac8P" 00:20:19.369 } 00:20:19.369 } 00:20:19.369 ] 00:20:19.369 }, 00:20:19.369 { 00:20:19.369 "subsystem": "iobuf", 00:20:19.369 "config": [ 00:20:19.369 { 00:20:19.370 "method": "iobuf_set_options", 00:20:19.370 "params": { 00:20:19.370 "small_pool_count": 8192, 00:20:19.370 "large_pool_count": 1024, 00:20:19.370 "small_bufsize": 8192, 00:20:19.370 "large_bufsize": 135168, 00:20:19.370 "enable_numa": false 00:20:19.370 } 00:20:19.370 } 00:20:19.370 ] 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "subsystem": "sock", 00:20:19.370 "config": [ 00:20:19.370 { 00:20:19.370 "method": "sock_set_default_impl", 00:20:19.370 "params": { 00:20:19.370 "impl_name": "posix" 00:20:19.370 } 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "method": "sock_impl_set_options", 00:20:19.370 "params": { 00:20:19.370 "impl_name": "ssl", 00:20:19.370 "recv_buf_size": 4096, 00:20:19.370 "send_buf_size": 4096, 00:20:19.370 "enable_recv_pipe": true, 00:20:19.370 "enable_quickack": false, 00:20:19.370 "enable_placement_id": 0, 00:20:19.370 "enable_zerocopy_send_server": true, 00:20:19.370 "enable_zerocopy_send_client": false, 00:20:19.370 "zerocopy_threshold": 0, 00:20:19.370 "tls_version": 0, 00:20:19.370 "enable_ktls": false 00:20:19.370 } 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "method": "sock_impl_set_options", 00:20:19.370 "params": { 00:20:19.370 "impl_name": "posix", 00:20:19.370 "recv_buf_size": 2097152, 00:20:19.370 "send_buf_size": 2097152, 00:20:19.370 "enable_recv_pipe": true, 00:20:19.370 "enable_quickack": false, 00:20:19.370 "enable_placement_id": 0, 00:20:19.370 "enable_zerocopy_send_server": true, 00:20:19.370 "enable_zerocopy_send_client": false, 00:20:19.370 "zerocopy_threshold": 0, 00:20:19.370 "tls_version": 0, 00:20:19.370 "enable_ktls": false 00:20:19.370 } 00:20:19.370 } 00:20:19.370 ] 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "subsystem": "vmd", 00:20:19.370 "config": [] 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "subsystem": "accel", 00:20:19.370 "config": [ 00:20:19.370 { 00:20:19.370 "method": "accel_set_options", 00:20:19.370 "params": { 00:20:19.370 "small_cache_size": 128, 00:20:19.370 "large_cache_size": 16, 00:20:19.370 "task_count": 2048, 00:20:19.370 "sequence_count": 2048, 00:20:19.370 "buf_count": 2048 00:20:19.370 } 00:20:19.370 } 00:20:19.370 ] 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "subsystem": "bdev", 00:20:19.370 "config": [ 00:20:19.370 { 00:20:19.370 "method": "bdev_set_options", 00:20:19.370 "params": { 00:20:19.370 "bdev_io_pool_size": 65535, 00:20:19.370 "bdev_io_cache_size": 256, 00:20:19.370 "bdev_auto_examine": true, 00:20:19.370 "iobuf_small_cache_size": 128, 00:20:19.370 "iobuf_large_cache_size": 16 00:20:19.370 } 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "method": "bdev_raid_set_options", 00:20:19.370 "params": { 00:20:19.370 "process_window_size_kb": 1024, 00:20:19.370 "process_max_bandwidth_mb_sec": 0 00:20:19.370 } 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "method": "bdev_iscsi_set_options", 00:20:19.370 "params": { 00:20:19.370 
"timeout_sec": 30 00:20:19.370 } 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "method": "bdev_nvme_set_options", 00:20:19.370 "params": { 00:20:19.370 "action_on_timeout": "none", 00:20:19.370 "timeout_us": 0, 00:20:19.370 "timeout_admin_us": 0, 00:20:19.370 "keep_alive_timeout_ms": 10000, 00:20:19.370 "arbitration_burst": 0, 00:20:19.370 "low_priority_weight": 0, 00:20:19.370 "medium_priority_weight": 0, 00:20:19.370 "high_priority_weight": 0, 00:20:19.370 "nvme_adminq_poll_period_us": 10000, 00:20:19.370 "nvme_ioq_poll_period_us": 0, 00:20:19.370 "io_queue_requests": 512, 00:20:19.370 "delay_cmd_submit": true, 00:20:19.370 "transport_retry_count": 4, 00:20:19.370 "bdev_retry_count": 3, 00:20:19.370 "transport_ack_timeout": 0, 00:20:19.370 "ctrlr_loss_timeout_sec": 0, 00:20:19.370 "reconnect_delay_sec": 0, 00:20:19.370 "fast_io_fail_timeout_sec": 0, 00:20:19.370 "disable_auto_failback": false, 00:20:19.370 "generate_uuids": false, 00:20:19.370 "transport_tos": 0, 00:20:19.370 "nvme_error_stat": false, 00:20:19.370 "rdma_srq_size": 0, 00:20:19.370 "io_path_stat": false, 00:20:19.370 "allow_accel_sequence": false, 00:20:19.370 "rdma_max_cq_size": 0, 00:20:19.370 "rdma_cm_event_timeout_ms": 0Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.370 , 00:20:19.370 "dhchap_digests": [ 00:20:19.370 "sha256", 00:20:19.370 "sha384", 00:20:19.370 "sha512" 00:20:19.370 ], 00:20:19.370 "dhchap_dhgroups": [ 00:20:19.370 "null", 00:20:19.370 "ffdhe2048", 00:20:19.370 "ffdhe3072", 00:20:19.370 "ffdhe4096", 00:20:19.370 "ffdhe6144", 00:20:19.370 "ffdhe8192" 00:20:19.370 ] 00:20:19.370 } 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "method": "bdev_nvme_attach_controller", 00:20:19.370 "params": { 00:20:19.370 "name": "TLSTEST", 00:20:19.370 "trtype": "TCP", 00:20:19.370 "adrfam": "IPv4", 00:20:19.370 "traddr": "10.0.0.2", 00:20:19.370 "trsvcid": "4420", 00:20:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.370 "prchk_reftag": false, 00:20:19.370 "prchk_guard": false, 00:20:19.370 "ctrlr_loss_timeout_sec": 0, 00:20:19.370 "reconnect_delay_sec": 0, 00:20:19.370 "fast_io_fail_timeout_sec": 0, 00:20:19.370 "psk": "key0", 00:20:19.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.370 "hdgst": false, 00:20:19.370 "ddgst": false, 00:20:19.370 "multipath": "multipath" 00:20:19.370 } 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "method": "bdev_nvme_set_hotplug", 00:20:19.370 "params": { 00:20:19.370 "period_us": 100000, 00:20:19.370 "enable": false 00:20:19.370 } 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "method": "bdev_wait_for_examine" 00:20:19.370 } 00:20:19.370 ] 00:20:19.370 }, 00:20:19.370 { 00:20:19.370 "subsystem": "nbd", 00:20:19.370 "config": [] 00:20:19.370 } 00:20:19.370 ] 00:20:19.370 }' 00:20:19.370 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.370 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.370 [2024-11-26 07:29:47.460496] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:20:19.370 [2024-11-26 07:29:47.460542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid758977 ] 00:20:19.629 [2024-11-26 07:29:47.518316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.629 [2024-11-26 07:29:47.558276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.629 [2024-11-26 07:29:47.711192] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.564 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.564 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:20.564 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:20.564 Running I/O for 10 seconds... 00:20:22.437 4966.00 IOPS, 19.40 MiB/s [2024-11-26T06:29:51.473Z] 4833.50 IOPS, 18.88 MiB/s [2024-11-26T06:29:52.409Z] 4719.00 IOPS, 18.43 MiB/s [2024-11-26T06:29:53.784Z] 4657.75 IOPS, 18.19 MiB/s [2024-11-26T06:29:54.719Z] 4680.80 IOPS, 18.28 MiB/s [2024-11-26T06:29:55.657Z] 4617.83 IOPS, 18.04 MiB/s [2024-11-26T06:29:56.594Z] 4578.43 IOPS, 17.88 MiB/s [2024-11-26T06:29:57.529Z] 4583.62 IOPS, 17.90 MiB/s [2024-11-26T06:29:58.462Z] 4592.89 IOPS, 17.94 MiB/s [2024-11-26T06:29:58.462Z] 4588.50 IOPS, 17.92 MiB/s 00:20:30.362 Latency(us) 00:20:30.362 [2024-11-26T06:29:58.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.362 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:30.362 Verification LBA range: start 0x0 length 0x2000 00:20:30.362 TLSTESTn1 : 10.02 4593.04 17.94 0.00 0.00 27828.38 5271.37 34648.60 00:20:30.362 [2024-11-26T06:29:58.462Z] =================================================================================================================== 00:20:30.362 [2024-11-26T06:29:58.462Z] Total : 4593.04 17.94 0.00 0.00 27828.38 5271.37 34648.60 00:20:30.362 { 00:20:30.362 "results": [ 00:20:30.362 { 00:20:30.362 "job": "TLSTESTn1", 00:20:30.363 "core_mask": "0x4", 00:20:30.363 "workload": "verify", 00:20:30.363 "status": "finished", 00:20:30.363 "verify_range": { 00:20:30.363 "start": 0, 00:20:30.363 "length": 8192 00:20:30.363 }, 00:20:30.363 "queue_depth": 128, 00:20:30.363 "io_size": 4096, 00:20:30.363 "runtime": 10.017767, 00:20:30.363 "iops": 4593.039546637489, 00:20:30.363 "mibps": 17.941560729052693, 00:20:30.363 "io_failed": 0, 00:20:30.363 "io_timeout": 0, 00:20:30.363 "avg_latency_us": 27828.382440573158, 00:20:30.363 "min_latency_us": 5271.373913043478, 00:20:30.363 "max_latency_us": 34648.59826086956 00:20:30.363 } 00:20:30.363 ], 00:20:30.363 "core_count": 1 00:20:30.363 } 00:20:30.363 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:30.363 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 758977 00:20:30.363 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 758977 ']' 00:20:30.363 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 758977 00:20:30.363 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:30.363 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.363 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 758977 00:20:30.620 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:30.620 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 758977' 00:20:30.621 killing process with pid 758977 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 758977 00:20:30.621 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.621 00:20:30.621 Latency(us) 00:20:30.621 [2024-11-26T06:29:58.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.621 [2024-11-26T06:29:58.721Z] =================================================================================================================== 00:20:30.621 [2024-11-26T06:29:58.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 758977 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 758734 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 758734 ']' 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 758734 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.621 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 758734 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 758734' 00:20:30.879 killing process with pid 758734 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 758734 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 758734 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=760824 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 760824 00:20:30.879 07:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 760824 ']' 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.879 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.879 [2024-11-26 07:29:58.918135] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:30.879 [2024-11-26 07:29:58.918182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.138 [2024-11-26 07:29:58.982130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.138 [2024-11-26 07:29:59.022711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.138 [2024-11-26 07:29:59.022742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.138 [2024-11-26 07:29:59.022750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.138 [2024-11-26 07:29:59.022757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.138 [2024-11-26 07:29:59.022762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:31.138 [2024-11-26 07:29:59.023373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.XeW1M5Ac8P 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.XeW1M5Ac8P 00:20:31.138 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:31.395 [2024-11-26 07:29:59.318140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.395 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:31.654 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:31.654 [2024-11-26 07:29:59.711143] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:31.654 [2024-11-26 07:29:59.711354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.654 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:31.913 malloc0 00:20:31.913 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.171 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=761130 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 761130 /var/tmp/bdevperf.sock 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 761130 ']' 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.430 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.690 [2024-11-26 07:30:00.560259] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:32.690 [2024-11-26 07:30:00.560311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761130 ] 00:20:32.690 [2024-11-26 07:30:00.623895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.690 [2024-11-26 07:30:00.671140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.690 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.690 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:32.690 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:32.948 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:33.206 [2024-11-26 07:30:01.118996] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.207 nvme0n1 00:20:33.207 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.207 Running I/O for 1 seconds... 
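On the target side, the setup_nvmf_tgt trace above reduces to creating the TCP transport, a subsystem backed by a malloc bdev, a TLS listener (the -k flag), and a host entry bound to the PSK. A condensed sketch of those rpc.py calls as they appear in the trace follows; $SPDK_DIR is again shorthand for the workspace path, and the actual run additionally wraps the target in ip netns exec cvl_0_0_ns_spdk, omitted here.

# Sketch of the target-side TLS setup traced above (default RPC socket /var/tmp/spdk.sock).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand for the path in this log
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o          # the saved config above shows "c2h_success": false
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0    # 32 MiB malloc bdev with 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0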
00:20:34.584 5353.00 IOPS, 20.91 MiB/s 00:20:34.584 Latency(us) 00:20:34.584 [2024-11-26T06:30:02.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.584 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:34.584 Verification LBA range: start 0x0 length 0x2000 00:20:34.584 nvme0n1 : 1.01 5416.99 21.16 0.00 0.00 23476.08 4729.99 24162.84 00:20:34.584 [2024-11-26T06:30:02.684Z] =================================================================================================================== 00:20:34.584 [2024-11-26T06:30:02.684Z] Total : 5416.99 21.16 0.00 0.00 23476.08 4729.99 24162.84 00:20:34.584 { 00:20:34.584 "results": [ 00:20:34.584 { 00:20:34.584 "job": "nvme0n1", 00:20:34.584 "core_mask": "0x2", 00:20:34.584 "workload": "verify", 00:20:34.584 "status": "finished", 00:20:34.584 "verify_range": { 00:20:34.584 "start": 0, 00:20:34.584 "length": 8192 00:20:34.584 }, 00:20:34.584 "queue_depth": 128, 00:20:34.584 "io_size": 4096, 00:20:34.584 "runtime": 1.011816, 00:20:34.584 "iops": 5416.992812922507, 00:20:34.584 "mibps": 21.160128175478544, 00:20:34.584 "io_failed": 0, 00:20:34.584 "io_timeout": 0, 00:20:34.584 "avg_latency_us": 23476.080890348476, 00:20:34.584 "min_latency_us": 4729.989565217391, 00:20:34.584 "max_latency_us": 24162.838260869565 00:20:34.584 } 00:20:34.584 ], 00:20:34.584 "core_count": 1 00:20:34.584 } 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 761130 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 761130 ']' 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 761130 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 761130 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 761130' 00:20:34.584 killing process with pid 761130 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 761130 00:20:34.584 Received shutdown signal, test time was about 1.000000 seconds 00:20:34.584 00:20:34.584 Latency(us) 00:20:34.584 [2024-11-26T06:30:02.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.584 [2024-11-26T06:30:02.684Z] =================================================================================================================== 00:20:34.584 [2024-11-26T06:30:02.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 761130 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 760824 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 760824 ']' 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 760824 00:20:34.584 07:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 760824 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.584 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 760824' 00:20:34.584 killing process with pid 760824 00:20:34.585 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 760824 00:20:34.585 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 760824 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=761684 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 761684 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 761684 ']' 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.843 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.843 [2024-11-26 07:30:02.811439] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:34.843 [2024-11-26 07:30:02.811486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.843 [2024-11-26 07:30:02.877535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.843 [2024-11-26 07:30:02.919985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.843 [2024-11-26 07:30:02.920021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:34.843 [2024-11-26 07:30:02.920028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.843 [2024-11-26 07:30:02.920034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.843 [2024-11-26 07:30:02.920039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.843 [2024-11-26 07:30:02.920613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.102 [2024-11-26 07:30:03.059589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.102 malloc0 00:20:35.102 [2024-11-26 07:30:03.087805] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.102 [2024-11-26 07:30:03.088027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=761708 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 761708 /var/tmp/bdevperf.sock 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 761708 ']' 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.102 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.102 [2024-11-26 07:30:03.161369] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:20:35.102 [2024-11-26 07:30:03.161409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761708 ] 00:20:35.361 [2024-11-26 07:30:03.222584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.361 [2024-11-26 07:30:03.263993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.361 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.361 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.361 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XeW1M5Ac8P 00:20:35.619 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:35.878 [2024-11-26 07:30:03.732256] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.878 nvme0n1 00:20:35.878 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:35.878 Running I/O for 1 seconds... 00:20:37.253 5348.00 IOPS, 20.89 MiB/s 00:20:37.253 Latency(us) 00:20:37.253 [2024-11-26T06:30:05.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.253 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:37.253 Verification LBA range: start 0x0 length 0x2000 00:20:37.253 nvme0n1 : 1.02 5390.77 21.06 0.00 0.00 23563.18 7009.50 22681.15 00:20:37.253 [2024-11-26T06:30:05.353Z] =================================================================================================================== 00:20:37.253 [2024-11-26T06:30:05.353Z] Total : 5390.77 21.06 0.00 0.00 23563.18 7009.50 22681.15 00:20:37.253 { 00:20:37.253 "results": [ 00:20:37.253 { 00:20:37.253 "job": "nvme0n1", 00:20:37.253 "core_mask": "0x2", 00:20:37.253 "workload": "verify", 00:20:37.253 "status": "finished", 00:20:37.253 "verify_range": { 00:20:37.253 "start": 0, 00:20:37.253 "length": 8192 00:20:37.253 }, 00:20:37.253 "queue_depth": 128, 00:20:37.253 "io_size": 4096, 00:20:37.253 "runtime": 1.015811, 00:20:37.253 "iops": 5390.766589454141, 00:20:37.253 "mibps": 21.057681990055237, 00:20:37.253 "io_failed": 0, 00:20:37.253 "io_timeout": 0, 00:20:37.253 "avg_latency_us": 23563.18084288754, 00:20:37.253 "min_latency_us": 7009.502608695652, 00:20:37.253 "max_latency_us": 22681.154782608697 00:20:37.253 } 00:20:37.253 ], 00:20:37.253 "core_count": 1 00:20:37.253 } 00:20:37.253 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:37.253 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.253 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.253 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.253 07:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:37.253 "subsystems": [ 00:20:37.253 { 00:20:37.253 "subsystem": "keyring", 00:20:37.253 "config": [ 00:20:37.253 { 00:20:37.253 "method": "keyring_file_add_key", 00:20:37.253 "params": { 00:20:37.253 "name": "key0", 00:20:37.253 "path": "/tmp/tmp.XeW1M5Ac8P" 00:20:37.253 } 00:20:37.253 } 00:20:37.253 ] 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "subsystem": "iobuf", 00:20:37.253 "config": [ 00:20:37.253 { 00:20:37.253 "method": "iobuf_set_options", 00:20:37.253 "params": { 00:20:37.253 "small_pool_count": 8192, 00:20:37.253 "large_pool_count": 1024, 00:20:37.253 "small_bufsize": 8192, 00:20:37.253 "large_bufsize": 135168, 00:20:37.253 "enable_numa": false 00:20:37.253 } 00:20:37.253 } 00:20:37.253 ] 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "subsystem": "sock", 00:20:37.253 "config": [ 00:20:37.253 { 00:20:37.253 "method": "sock_set_default_impl", 00:20:37.253 "params": { 00:20:37.253 "impl_name": "posix" 00:20:37.253 } 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "method": "sock_impl_set_options", 00:20:37.253 "params": { 00:20:37.253 "impl_name": "ssl", 00:20:37.253 "recv_buf_size": 4096, 00:20:37.253 "send_buf_size": 4096, 00:20:37.253 "enable_recv_pipe": true, 00:20:37.253 "enable_quickack": false, 00:20:37.253 "enable_placement_id": 0, 00:20:37.253 "enable_zerocopy_send_server": true, 00:20:37.253 "enable_zerocopy_send_client": false, 00:20:37.253 "zerocopy_threshold": 0, 00:20:37.253 "tls_version": 0, 00:20:37.253 "enable_ktls": false 00:20:37.253 } 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "method": "sock_impl_set_options", 00:20:37.253 "params": { 00:20:37.253 "impl_name": "posix", 00:20:37.253 "recv_buf_size": 2097152, 00:20:37.253 "send_buf_size": 2097152, 00:20:37.253 "enable_recv_pipe": true, 00:20:37.253 "enable_quickack": false, 00:20:37.253 "enable_placement_id": 0, 00:20:37.253 "enable_zerocopy_send_server": true, 00:20:37.253 "enable_zerocopy_send_client": false, 00:20:37.253 "zerocopy_threshold": 0, 00:20:37.253 "tls_version": 0, 00:20:37.253 "enable_ktls": false 00:20:37.253 } 00:20:37.253 } 00:20:37.253 ] 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "subsystem": "vmd", 00:20:37.253 "config": [] 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "subsystem": "accel", 00:20:37.253 "config": [ 00:20:37.253 { 00:20:37.253 "method": "accel_set_options", 00:20:37.253 "params": { 00:20:37.253 "small_cache_size": 128, 00:20:37.253 "large_cache_size": 16, 00:20:37.253 "task_count": 2048, 00:20:37.253 "sequence_count": 2048, 00:20:37.253 "buf_count": 2048 00:20:37.253 } 00:20:37.253 } 00:20:37.253 ] 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "subsystem": "bdev", 00:20:37.253 "config": [ 00:20:37.253 { 00:20:37.253 "method": "bdev_set_options", 00:20:37.253 "params": { 00:20:37.253 "bdev_io_pool_size": 65535, 00:20:37.253 "bdev_io_cache_size": 256, 00:20:37.253 "bdev_auto_examine": true, 00:20:37.253 "iobuf_small_cache_size": 128, 00:20:37.253 "iobuf_large_cache_size": 16 00:20:37.253 } 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "method": "bdev_raid_set_options", 00:20:37.253 "params": { 00:20:37.253 "process_window_size_kb": 1024, 00:20:37.253 "process_max_bandwidth_mb_sec": 0 00:20:37.253 } 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "method": "bdev_iscsi_set_options", 00:20:37.253 "params": { 00:20:37.253 "timeout_sec": 30 00:20:37.253 } 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "method": "bdev_nvme_set_options", 00:20:37.253 "params": { 00:20:37.253 "action_on_timeout": "none", 00:20:37.253 
"timeout_us": 0, 00:20:37.253 "timeout_admin_us": 0, 00:20:37.253 "keep_alive_timeout_ms": 10000, 00:20:37.253 "arbitration_burst": 0, 00:20:37.253 "low_priority_weight": 0, 00:20:37.253 "medium_priority_weight": 0, 00:20:37.253 "high_priority_weight": 0, 00:20:37.253 "nvme_adminq_poll_period_us": 10000, 00:20:37.253 "nvme_ioq_poll_period_us": 0, 00:20:37.253 "io_queue_requests": 0, 00:20:37.253 "delay_cmd_submit": true, 00:20:37.253 "transport_retry_count": 4, 00:20:37.253 "bdev_retry_count": 3, 00:20:37.253 "transport_ack_timeout": 0, 00:20:37.253 "ctrlr_loss_timeout_sec": 0, 00:20:37.253 "reconnect_delay_sec": 0, 00:20:37.253 "fast_io_fail_timeout_sec": 0, 00:20:37.253 "disable_auto_failback": false, 00:20:37.253 "generate_uuids": false, 00:20:37.253 "transport_tos": 0, 00:20:37.253 "nvme_error_stat": false, 00:20:37.253 "rdma_srq_size": 0, 00:20:37.253 "io_path_stat": false, 00:20:37.253 "allow_accel_sequence": false, 00:20:37.253 "rdma_max_cq_size": 0, 00:20:37.253 "rdma_cm_event_timeout_ms": 0, 00:20:37.253 "dhchap_digests": [ 00:20:37.253 "sha256", 00:20:37.253 "sha384", 00:20:37.253 "sha512" 00:20:37.253 ], 00:20:37.253 "dhchap_dhgroups": [ 00:20:37.253 "null", 00:20:37.253 "ffdhe2048", 00:20:37.253 "ffdhe3072", 00:20:37.253 "ffdhe4096", 00:20:37.253 "ffdhe6144", 00:20:37.253 "ffdhe8192" 00:20:37.253 ] 00:20:37.253 } 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "method": "bdev_nvme_set_hotplug", 00:20:37.253 "params": { 00:20:37.253 "period_us": 100000, 00:20:37.253 "enable": false 00:20:37.253 } 00:20:37.253 }, 00:20:37.253 { 00:20:37.253 "method": "bdev_malloc_create", 00:20:37.253 "params": { 00:20:37.253 "name": "malloc0", 00:20:37.253 "num_blocks": 8192, 00:20:37.253 "block_size": 4096, 00:20:37.253 "physical_block_size": 4096, 00:20:37.253 "uuid": "bdc4fcd6-0fa7-48dc-a60a-9aac6dd6ce06", 00:20:37.254 "optimal_io_boundary": 0, 00:20:37.254 "md_size": 0, 00:20:37.254 "dif_type": 0, 00:20:37.254 "dif_is_head_of_md": false, 00:20:37.254 "dif_pi_format": 0 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "bdev_wait_for_examine" 00:20:37.254 } 00:20:37.254 ] 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "subsystem": "nbd", 00:20:37.254 "config": [] 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "subsystem": "scheduler", 00:20:37.254 "config": [ 00:20:37.254 { 00:20:37.254 "method": "framework_set_scheduler", 00:20:37.254 "params": { 00:20:37.254 "name": "static" 00:20:37.254 } 00:20:37.254 } 00:20:37.254 ] 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "subsystem": "nvmf", 00:20:37.254 "config": [ 00:20:37.254 { 00:20:37.254 "method": "nvmf_set_config", 00:20:37.254 "params": { 00:20:37.254 "discovery_filter": "match_any", 00:20:37.254 "admin_cmd_passthru": { 00:20:37.254 "identify_ctrlr": false 00:20:37.254 }, 00:20:37.254 "dhchap_digests": [ 00:20:37.254 "sha256", 00:20:37.254 "sha384", 00:20:37.254 "sha512" 00:20:37.254 ], 00:20:37.254 "dhchap_dhgroups": [ 00:20:37.254 "null", 00:20:37.254 "ffdhe2048", 00:20:37.254 "ffdhe3072", 00:20:37.254 "ffdhe4096", 00:20:37.254 "ffdhe6144", 00:20:37.254 "ffdhe8192" 00:20:37.254 ] 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "nvmf_set_max_subsystems", 00:20:37.254 "params": { 00:20:37.254 "max_subsystems": 1024 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "nvmf_set_crdt", 00:20:37.254 "params": { 00:20:37.254 "crdt1": 0, 00:20:37.254 "crdt2": 0, 00:20:37.254 "crdt3": 0 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "nvmf_create_transport", 00:20:37.254 "params": 
{ 00:20:37.254 "trtype": "TCP", 00:20:37.254 "max_queue_depth": 128, 00:20:37.254 "max_io_qpairs_per_ctrlr": 127, 00:20:37.254 "in_capsule_data_size": 4096, 00:20:37.254 "max_io_size": 131072, 00:20:37.254 "io_unit_size": 131072, 00:20:37.254 "max_aq_depth": 128, 00:20:37.254 "num_shared_buffers": 511, 00:20:37.254 "buf_cache_size": 4294967295, 00:20:37.254 "dif_insert_or_strip": false, 00:20:37.254 "zcopy": false, 00:20:37.254 "c2h_success": false, 00:20:37.254 "sock_priority": 0, 00:20:37.254 "abort_timeout_sec": 1, 00:20:37.254 "ack_timeout": 0, 00:20:37.254 "data_wr_pool_size": 0 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "nvmf_create_subsystem", 00:20:37.254 "params": { 00:20:37.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.254 "allow_any_host": false, 00:20:37.254 "serial_number": "00000000000000000000", 00:20:37.254 "model_number": "SPDK bdev Controller", 00:20:37.254 "max_namespaces": 32, 00:20:37.254 "min_cntlid": 1, 00:20:37.254 "max_cntlid": 65519, 00:20:37.254 "ana_reporting": false 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "nvmf_subsystem_add_host", 00:20:37.254 "params": { 00:20:37.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.254 "host": "nqn.2016-06.io.spdk:host1", 00:20:37.254 "psk": "key0" 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "nvmf_subsystem_add_ns", 00:20:37.254 "params": { 00:20:37.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.254 "namespace": { 00:20:37.254 "nsid": 1, 00:20:37.254 "bdev_name": "malloc0", 00:20:37.254 "nguid": "BDC4FCD60FA748DCA60A9AAC6DD6CE06", 00:20:37.254 "uuid": "bdc4fcd6-0fa7-48dc-a60a-9aac6dd6ce06", 00:20:37.254 "no_auto_visible": false 00:20:37.254 } 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "nvmf_subsystem_add_listener", 00:20:37.254 "params": { 00:20:37.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.254 "listen_address": { 00:20:37.254 "trtype": "TCP", 00:20:37.254 "adrfam": "IPv4", 00:20:37.254 "traddr": "10.0.0.2", 00:20:37.254 "trsvcid": "4420" 00:20:37.254 }, 00:20:37.254 "secure_channel": false, 00:20:37.254 "sock_impl": "ssl" 00:20:37.254 } 00:20:37.254 } 00:20:37.254 ] 00:20:37.254 } 00:20:37.254 ] 00:20:37.254 }' 00:20:37.254 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:37.254 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:37.254 "subsystems": [ 00:20:37.254 { 00:20:37.254 "subsystem": "keyring", 00:20:37.254 "config": [ 00:20:37.254 { 00:20:37.254 "method": "keyring_file_add_key", 00:20:37.254 "params": { 00:20:37.254 "name": "key0", 00:20:37.254 "path": "/tmp/tmp.XeW1M5Ac8P" 00:20:37.254 } 00:20:37.254 } 00:20:37.254 ] 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "subsystem": "iobuf", 00:20:37.254 "config": [ 00:20:37.254 { 00:20:37.254 "method": "iobuf_set_options", 00:20:37.254 "params": { 00:20:37.254 "small_pool_count": 8192, 00:20:37.254 "large_pool_count": 1024, 00:20:37.254 "small_bufsize": 8192, 00:20:37.254 "large_bufsize": 135168, 00:20:37.254 "enable_numa": false 00:20:37.254 } 00:20:37.254 } 00:20:37.254 ] 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "subsystem": "sock", 00:20:37.254 "config": [ 00:20:37.254 { 00:20:37.254 "method": "sock_set_default_impl", 00:20:37.254 "params": { 00:20:37.254 "impl_name": "posix" 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "sock_impl_set_options", 00:20:37.254 
"params": { 00:20:37.254 "impl_name": "ssl", 00:20:37.254 "recv_buf_size": 4096, 00:20:37.254 "send_buf_size": 4096, 00:20:37.254 "enable_recv_pipe": true, 00:20:37.254 "enable_quickack": false, 00:20:37.254 "enable_placement_id": 0, 00:20:37.254 "enable_zerocopy_send_server": true, 00:20:37.254 "enable_zerocopy_send_client": false, 00:20:37.254 "zerocopy_threshold": 0, 00:20:37.254 "tls_version": 0, 00:20:37.254 "enable_ktls": false 00:20:37.254 } 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "method": "sock_impl_set_options", 00:20:37.254 "params": { 00:20:37.254 "impl_name": "posix", 00:20:37.254 "recv_buf_size": 2097152, 00:20:37.254 "send_buf_size": 2097152, 00:20:37.254 "enable_recv_pipe": true, 00:20:37.254 "enable_quickack": false, 00:20:37.254 "enable_placement_id": 0, 00:20:37.254 "enable_zerocopy_send_server": true, 00:20:37.254 "enable_zerocopy_send_client": false, 00:20:37.254 "zerocopy_threshold": 0, 00:20:37.254 "tls_version": 0, 00:20:37.254 "enable_ktls": false 00:20:37.254 } 00:20:37.254 } 00:20:37.254 ] 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "subsystem": "vmd", 00:20:37.254 "config": [] 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "subsystem": "accel", 00:20:37.254 "config": [ 00:20:37.254 { 00:20:37.254 "method": "accel_set_options", 00:20:37.254 "params": { 00:20:37.254 "small_cache_size": 128, 00:20:37.254 "large_cache_size": 16, 00:20:37.254 "task_count": 2048, 00:20:37.254 "sequence_count": 2048, 00:20:37.254 "buf_count": 2048 00:20:37.254 } 00:20:37.254 } 00:20:37.254 ] 00:20:37.254 }, 00:20:37.254 { 00:20:37.254 "subsystem": "bdev", 00:20:37.254 "config": [ 00:20:37.254 { 00:20:37.254 "method": "bdev_set_options", 00:20:37.254 "params": { 00:20:37.254 "bdev_io_pool_size": 65535, 00:20:37.254 "bdev_io_cache_size": 256, 00:20:37.254 "bdev_auto_examine": true, 00:20:37.254 "iobuf_small_cache_size": 128, 00:20:37.255 "iobuf_large_cache_size": 16 00:20:37.255 } 00:20:37.255 }, 00:20:37.255 { 00:20:37.255 "method": "bdev_raid_set_options", 00:20:37.255 "params": { 00:20:37.255 "process_window_size_kb": 1024, 00:20:37.255 "process_max_bandwidth_mb_sec": 0 00:20:37.255 } 00:20:37.255 }, 00:20:37.255 { 00:20:37.255 "method": "bdev_iscsi_set_options", 00:20:37.255 "params": { 00:20:37.255 "timeout_sec": 30 00:20:37.255 } 00:20:37.255 }, 00:20:37.255 { 00:20:37.255 "method": "bdev_nvme_set_options", 00:20:37.255 "params": { 00:20:37.255 "action_on_timeout": "none", 00:20:37.255 "timeout_us": 0, 00:20:37.255 "timeout_admin_us": 0, 00:20:37.255 "keep_alive_timeout_ms": 10000, 00:20:37.255 "arbitration_burst": 0, 00:20:37.255 "low_priority_weight": 0, 00:20:37.255 "medium_priority_weight": 0, 00:20:37.255 "high_priority_weight": 0, 00:20:37.255 "nvme_adminq_poll_period_us": 10000, 00:20:37.255 "nvme_ioq_poll_period_us": 0, 00:20:37.255 "io_queue_requests": 512, 00:20:37.255 "delay_cmd_submit": true, 00:20:37.255 "transport_retry_count": 4, 00:20:37.255 "bdev_retry_count": 3, 00:20:37.255 "transport_ack_timeout": 0, 00:20:37.255 "ctrlr_loss_timeout_sec": 0, 00:20:37.255 "reconnect_delay_sec": 0, 00:20:37.255 "fast_io_fail_timeout_sec": 0, 00:20:37.255 "disable_auto_failback": false, 00:20:37.255 "generate_uuids": false, 00:20:37.255 "transport_tos": 0, 00:20:37.255 "nvme_error_stat": false, 00:20:37.255 "rdma_srq_size": 0, 00:20:37.255 "io_path_stat": false, 00:20:37.255 "allow_accel_sequence": false, 00:20:37.255 "rdma_max_cq_size": 0, 00:20:37.255 "rdma_cm_event_timeout_ms": 0, 00:20:37.255 "dhchap_digests": [ 00:20:37.255 "sha256", 00:20:37.255 "sha384", 00:20:37.255 
"sha512" 00:20:37.255 ], 00:20:37.255 "dhchap_dhgroups": [ 00:20:37.255 "null", 00:20:37.255 "ffdhe2048", 00:20:37.255 "ffdhe3072", 00:20:37.255 "ffdhe4096", 00:20:37.255 "ffdhe6144", 00:20:37.255 "ffdhe8192" 00:20:37.255 ] 00:20:37.255 } 00:20:37.255 }, 00:20:37.255 { 00:20:37.255 "method": "bdev_nvme_attach_controller", 00:20:37.255 "params": { 00:20:37.255 "name": "nvme0", 00:20:37.255 "trtype": "TCP", 00:20:37.255 "adrfam": "IPv4", 00:20:37.255 "traddr": "10.0.0.2", 00:20:37.255 "trsvcid": "4420", 00:20:37.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.255 "prchk_reftag": false, 00:20:37.255 "prchk_guard": false, 00:20:37.255 "ctrlr_loss_timeout_sec": 0, 00:20:37.255 "reconnect_delay_sec": 0, 00:20:37.255 "fast_io_fail_timeout_sec": 0, 00:20:37.255 "psk": "key0", 00:20:37.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.255 "hdgst": false, 00:20:37.255 "ddgst": false, 00:20:37.255 "multipath": "multipath" 00:20:37.255 } 00:20:37.255 }, 00:20:37.255 { 00:20:37.255 "method": "bdev_nvme_set_hotplug", 00:20:37.255 "params": { 00:20:37.255 "period_us": 100000, 00:20:37.255 "enable": false 00:20:37.255 } 00:20:37.255 }, 00:20:37.255 { 00:20:37.255 "method": "bdev_enable_histogram", 00:20:37.255 "params": { 00:20:37.255 "name": "nvme0n1", 00:20:37.255 "enable": true 00:20:37.255 } 00:20:37.255 }, 00:20:37.255 { 00:20:37.255 "method": "bdev_wait_for_examine" 00:20:37.255 } 00:20:37.255 ] 00:20:37.255 }, 00:20:37.255 { 00:20:37.255 "subsystem": "nbd", 00:20:37.255 "config": [] 00:20:37.255 } 00:20:37.255 ] 00:20:37.255 }' 00:20:37.255 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 761708 00:20:37.255 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 761708 ']' 00:20:37.255 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 761708 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 761708 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 761708' 00:20:37.543 killing process with pid 761708 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 761708 00:20:37.543 Received shutdown signal, test time was about 1.000000 seconds 00:20:37.543 00:20:37.543 Latency(us) 00:20:37.543 [2024-11-26T06:30:05.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.543 [2024-11-26T06:30:05.643Z] =================================================================================================================== 00:20:37.543 [2024-11-26T06:30:05.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 761708 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 761684 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 761684 ']' 
00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 761684 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 761684 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 761684' 00:20:37.543 killing process with pid 761684 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 761684 00:20:37.543 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 761684 00:20:37.802 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:37.802 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.802 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.802 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:37.802 "subsystems": [ 00:20:37.802 { 00:20:37.802 "subsystem": "keyring", 00:20:37.802 "config": [ 00:20:37.802 { 00:20:37.802 "method": "keyring_file_add_key", 00:20:37.802 "params": { 00:20:37.802 "name": "key0", 00:20:37.802 "path": "/tmp/tmp.XeW1M5Ac8P" 00:20:37.802 } 00:20:37.802 } 00:20:37.802 ] 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "subsystem": "iobuf", 00:20:37.802 "config": [ 00:20:37.802 { 00:20:37.802 "method": "iobuf_set_options", 00:20:37.802 "params": { 00:20:37.802 "small_pool_count": 8192, 00:20:37.802 "large_pool_count": 1024, 00:20:37.802 "small_bufsize": 8192, 00:20:37.802 "large_bufsize": 135168, 00:20:37.802 "enable_numa": false 00:20:37.802 } 00:20:37.802 } 00:20:37.802 ] 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "subsystem": "sock", 00:20:37.802 "config": [ 00:20:37.802 { 00:20:37.802 "method": "sock_set_default_impl", 00:20:37.802 "params": { 00:20:37.802 "impl_name": "posix" 00:20:37.802 } 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "method": "sock_impl_set_options", 00:20:37.802 "params": { 00:20:37.802 "impl_name": "ssl", 00:20:37.802 "recv_buf_size": 4096, 00:20:37.802 "send_buf_size": 4096, 00:20:37.802 "enable_recv_pipe": true, 00:20:37.802 "enable_quickack": false, 00:20:37.802 "enable_placement_id": 0, 00:20:37.802 "enable_zerocopy_send_server": true, 00:20:37.802 "enable_zerocopy_send_client": false, 00:20:37.802 "zerocopy_threshold": 0, 00:20:37.802 "tls_version": 0, 00:20:37.802 "enable_ktls": false 00:20:37.802 } 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "method": "sock_impl_set_options", 00:20:37.802 "params": { 00:20:37.802 "impl_name": "posix", 00:20:37.802 "recv_buf_size": 2097152, 00:20:37.802 "send_buf_size": 2097152, 00:20:37.802 "enable_recv_pipe": true, 00:20:37.802 "enable_quickack": false, 00:20:37.802 "enable_placement_id": 0, 00:20:37.802 "enable_zerocopy_send_server": true, 00:20:37.802 "enable_zerocopy_send_client": false, 00:20:37.802 "zerocopy_threshold": 0, 00:20:37.802 "tls_version": 0, 00:20:37.802 "enable_ktls": false 
00:20:37.802 } 00:20:37.802 } 00:20:37.802 ] 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "subsystem": "vmd", 00:20:37.802 "config": [] 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "subsystem": "accel", 00:20:37.802 "config": [ 00:20:37.802 { 00:20:37.802 "method": "accel_set_options", 00:20:37.802 "params": { 00:20:37.802 "small_cache_size": 128, 00:20:37.802 "large_cache_size": 16, 00:20:37.802 "task_count": 2048, 00:20:37.802 "sequence_count": 2048, 00:20:37.802 "buf_count": 2048 00:20:37.802 } 00:20:37.802 } 00:20:37.802 ] 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "subsystem": "bdev", 00:20:37.802 "config": [ 00:20:37.802 { 00:20:37.802 "method": "bdev_set_options", 00:20:37.802 "params": { 00:20:37.802 "bdev_io_pool_size": 65535, 00:20:37.802 "bdev_io_cache_size": 256, 00:20:37.802 "bdev_auto_examine": true, 00:20:37.802 "iobuf_small_cache_size": 128, 00:20:37.802 "iobuf_large_cache_size": 16 00:20:37.802 } 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "method": "bdev_raid_set_options", 00:20:37.802 "params": { 00:20:37.802 "process_window_size_kb": 1024, 00:20:37.802 "process_max_bandwidth_mb_sec": 0 00:20:37.802 } 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "method": "bdev_iscsi_set_options", 00:20:37.802 "params": { 00:20:37.802 "timeout_sec": 30 00:20:37.802 } 00:20:37.802 }, 00:20:37.802 { 00:20:37.802 "method": "bdev_nvme_set_options", 00:20:37.802 "params": { 00:20:37.802 "action_on_timeout": "none", 00:20:37.802 "timeout_us": 0, 00:20:37.802 "timeout_admin_us": 0, 00:20:37.803 "keep_alive_timeout_ms": 10000, 00:20:37.803 "arbitration_burst": 0, 00:20:37.803 "low_priority_weight": 0, 00:20:37.803 "medium_priority_weight": 0, 00:20:37.803 "high_priority_weight": 0, 00:20:37.803 "nvme_adminq_poll_period_us": 10000, 00:20:37.803 "nvme_ioq_poll_period_us": 0, 00:20:37.803 "io_queue_requests": 0, 00:20:37.803 "delay_cmd_submit": true, 00:20:37.803 "transport_retry_count": 4, 00:20:37.803 "bdev_retry_count": 3, 00:20:37.803 "transport_ack_timeout": 0, 00:20:37.803 "ctrlr_loss_timeout_sec": 0, 00:20:37.803 "reconnect_delay_sec": 0, 00:20:37.803 "fast_io_fail_timeout_sec": 0, 00:20:37.803 "disable_auto_failback": false, 00:20:37.803 "generate_uuids": false, 00:20:37.803 "transport_tos": 0, 00:20:37.803 "nvme_error_stat": false, 00:20:37.803 "rdma_srq_size": 0, 00:20:37.803 "io_path_stat": false, 00:20:37.803 "allow_accel_sequence": false, 00:20:37.803 "rdma_max_cq_size": 0, 00:20:37.803 "rdma_cm_event_timeout_ms": 0, 00:20:37.803 "dhchap_digests": [ 00:20:37.803 "sha256", 00:20:37.803 "sha384", 00:20:37.803 "sha512" 00:20:37.803 ], 00:20:37.803 "dhchap_dhgroups": [ 00:20:37.803 "null", 00:20:37.803 "ffdhe2048", 00:20:37.803 "ffdhe3072", 00:20:37.803 "ffdhe4096", 00:20:37.803 "ffdhe6144", 00:20:37.803 "ffdhe8192" 00:20:37.803 ] 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "bdev_nvme_set_hotplug", 00:20:37.803 "params": { 00:20:37.803 "period_us": 100000, 00:20:37.803 "enable": false 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "bdev_malloc_create", 00:20:37.803 "params": { 00:20:37.803 "name": "malloc0", 00:20:37.803 "num_blocks": 8192, 00:20:37.803 "block_size": 4096, 00:20:37.803 "physical_block_size": 4096, 00:20:37.803 "uuid": "bdc4fcd6-0fa7-48dc-a60a-9aac6dd6ce06", 00:20:37.803 "optimal_io_boundary": 0, 00:20:37.803 "md_size": 0, 00:20:37.803 "dif_type": 0, 00:20:37.803 "dif_is_head_of_md": false, 00:20:37.803 "dif_pi_format": 0 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "bdev_wait_for_examine" 00:20:37.803 } 
00:20:37.803 ] 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "subsystem": "nbd", 00:20:37.803 "config": [] 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "subsystem": "scheduler", 00:20:37.803 "config": [ 00:20:37.803 { 00:20:37.803 "method": "framework_set_scheduler", 00:20:37.803 "params": { 00:20:37.803 "name": "static" 00:20:37.803 } 00:20:37.803 } 00:20:37.803 ] 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "subsystem": "nvmf", 00:20:37.803 "config": [ 00:20:37.803 { 00:20:37.803 "method": "nvmf_set_config", 00:20:37.803 "params": { 00:20:37.803 "discovery_filter": "match_any", 00:20:37.803 "admin_cmd_passthru": { 00:20:37.803 "identify_ctrlr": false 00:20:37.803 }, 00:20:37.803 "dhchap_digests": [ 00:20:37.803 "sha256", 00:20:37.803 "sha384", 00:20:37.803 "sha512" 00:20:37.803 ], 00:20:37.803 "dhchap_dhgroups": [ 00:20:37.803 "null", 00:20:37.803 "ffdhe2048", 00:20:37.803 "ffdhe3072", 00:20:37.803 "ffdhe4096", 00:20:37.803 "ffdhe6144", 00:20:37.803 "ffdhe8192" 00:20:37.803 ] 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "nvmf_set_max_subsystems", 00:20:37.803 "params": { 00:20:37.803 "max_subsystems": 1024 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "nvmf_set_crdt", 00:20:37.803 "params": { 00:20:37.803 "crdt1": 0, 00:20:37.803 "crdt2": 0, 00:20:37.803 "crdt3": 0 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "nvmf_create_transport", 00:20:37.803 "params": { 00:20:37.803 "trtype": "TCP", 00:20:37.803 "max_queue_depth": 128, 00:20:37.803 "max_io_qpairs_per_ctrlr": 127, 00:20:37.803 "in_capsule_data_size": 4096, 00:20:37.803 "max_io_size": 131072, 00:20:37.803 "io_unit_size": 131072, 00:20:37.803 "max_aq_depth": 128, 00:20:37.803 "num_shared_buffers": 511, 00:20:37.803 "buf_cache_size": 4294967295, 00:20:37.803 "dif_insert_or_strip": false, 00:20:37.803 "zcopy": false, 00:20:37.803 "c2h_success": false, 00:20:37.803 "sock_priority": 0, 00:20:37.803 "abort_timeout_sec": 1, 00:20:37.803 "ack_timeout": 0, 00:20:37.803 "data_wr_pool_size": 0 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "nvmf_create_subsystem", 00:20:37.803 "params": { 00:20:37.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.803 "allow_any_host": false, 00:20:37.803 "serial_number": "00000000000000000000", 00:20:37.803 "model_number": "SPDK bdev Controller", 00:20:37.803 "max_namespaces": 32, 00:20:37.803 "min_cntlid": 1, 00:20:37.803 "max_cntlid": 65519, 00:20:37.803 "ana_reporting": false 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "nvmf_subsystem_add_host", 00:20:37.803 "params": { 00:20:37.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.803 "host": "nqn.2016-06.io.spdk:host1", 00:20:37.803 "psk": "key0" 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "nvmf_subsystem_add_ns", 00:20:37.803 "params": { 00:20:37.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.803 "namespace": { 00:20:37.803 "nsid": 1, 00:20:37.803 "bdev_name": "malloc0", 00:20:37.803 "nguid": "BDC4FCD60FA748DCA60A9AAC6DD6CE06", 00:20:37.803 "uuid": "bdc4fcd6-0fa7-48dc-a60a-9aac6dd6ce06", 00:20:37.803 "no_auto_visible": false 00:20:37.803 } 00:20:37.803 } 00:20:37.803 }, 00:20:37.803 { 00:20:37.803 "method": "nvmf_subsystem_add_listener", 00:20:37.803 "params": { 00:20:37.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.803 "listen_address": { 00:20:37.803 "trtype": "TCP", 00:20:37.803 "adrfam": "IPv4", 00:20:37.803 "traddr": "10.0.0.2", 00:20:37.803 "trsvcid": "4420" 00:20:37.803 }, 00:20:37.803 "secure_channel": false, 
00:20:37.803 "sock_impl": "ssl" 00:20:37.803 } 00:20:37.803 } 00:20:37.803 ] 00:20:37.803 } 00:20:37.803 ] 00:20:37.803 }' 00:20:37.803 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.803 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=762181 00:20:37.803 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 762181 00:20:37.803 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:37.803 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 762181 ']' 00:20:37.803 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.804 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.804 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.804 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.804 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.804 [2024-11-26 07:30:05.837563] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:37.804 [2024-11-26 07:30:05.837610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.062 [2024-11-26 07:30:05.903967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.062 [2024-11-26 07:30:05.945230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.062 [2024-11-26 07:30:05.945266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.062 [2024-11-26 07:30:05.945273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.062 [2024-11-26 07:30:05.945282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.062 [2024-11-26 07:30:05.945287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.062 [2024-11-26 07:30:05.945883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.322 [2024-11-26 07:30:06.160183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.322 [2024-11-26 07:30:06.192208] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.322 [2024-11-26 07:30:06.192427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.581 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.581 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:38.581 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.581 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.581 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=762307 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 762307 /var/tmp/bdevperf.sock 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 762307 ']' 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
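On the initiator side, bdevperf is started idle (-z) on the second core (-m 2) with its own RPC socket and gets its bdev config the same way, here on /dev/fd/63, and the script then waits for /var/tmp/bdevperf.sock to answer before driving it. A sketch of that start-up, where the polling loop is an assumed stand-in for the waitforlisten helper and $bdevperf_config stands for the JSON echoed next in the trace:

./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bdevperf_config") &
bdevperf_pid=$!
# poll the RPC socket until it answers before sending any commands to it
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done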
00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.840 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:38.840 "subsystems": [ 00:20:38.840 { 00:20:38.840 "subsystem": "keyring", 00:20:38.840 "config": [ 00:20:38.840 { 00:20:38.840 "method": "keyring_file_add_key", 00:20:38.840 "params": { 00:20:38.840 "name": "key0", 00:20:38.840 "path": "/tmp/tmp.XeW1M5Ac8P" 00:20:38.840 } 00:20:38.840 } 00:20:38.840 ] 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "subsystem": "iobuf", 00:20:38.840 "config": [ 00:20:38.840 { 00:20:38.840 "method": "iobuf_set_options", 00:20:38.840 "params": { 00:20:38.840 "small_pool_count": 8192, 00:20:38.840 "large_pool_count": 1024, 00:20:38.840 "small_bufsize": 8192, 00:20:38.840 "large_bufsize": 135168, 00:20:38.840 "enable_numa": false 00:20:38.840 } 00:20:38.840 } 00:20:38.840 ] 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "subsystem": "sock", 00:20:38.840 "config": [ 00:20:38.840 { 00:20:38.840 "method": "sock_set_default_impl", 00:20:38.840 "params": { 00:20:38.840 "impl_name": "posix" 00:20:38.840 } 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "method": "sock_impl_set_options", 00:20:38.840 "params": { 00:20:38.840 "impl_name": "ssl", 00:20:38.840 "recv_buf_size": 4096, 00:20:38.840 "send_buf_size": 4096, 00:20:38.840 "enable_recv_pipe": true, 00:20:38.840 "enable_quickack": false, 00:20:38.840 "enable_placement_id": 0, 00:20:38.840 "enable_zerocopy_send_server": true, 00:20:38.840 "enable_zerocopy_send_client": false, 00:20:38.840 "zerocopy_threshold": 0, 00:20:38.840 "tls_version": 0, 00:20:38.840 "enable_ktls": false 00:20:38.840 } 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "method": "sock_impl_set_options", 00:20:38.840 "params": { 00:20:38.840 "impl_name": "posix", 00:20:38.840 "recv_buf_size": 2097152, 00:20:38.840 "send_buf_size": 2097152, 00:20:38.840 "enable_recv_pipe": true, 00:20:38.840 "enable_quickack": false, 00:20:38.840 "enable_placement_id": 0, 00:20:38.840 "enable_zerocopy_send_server": true, 00:20:38.840 "enable_zerocopy_send_client": false, 00:20:38.840 "zerocopy_threshold": 0, 00:20:38.840 "tls_version": 0, 00:20:38.840 "enable_ktls": false 00:20:38.840 } 00:20:38.840 } 00:20:38.840 ] 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "subsystem": "vmd", 00:20:38.840 "config": [] 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "subsystem": "accel", 00:20:38.840 "config": [ 00:20:38.840 { 00:20:38.840 "method": "accel_set_options", 00:20:38.840 "params": { 00:20:38.840 "small_cache_size": 128, 00:20:38.840 "large_cache_size": 16, 00:20:38.840 "task_count": 2048, 00:20:38.840 "sequence_count": 2048, 00:20:38.840 "buf_count": 2048 00:20:38.840 } 00:20:38.840 } 00:20:38.840 ] 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "subsystem": "bdev", 00:20:38.840 "config": [ 00:20:38.840 { 00:20:38.840 "method": "bdev_set_options", 00:20:38.840 "params": { 00:20:38.840 "bdev_io_pool_size": 65535, 00:20:38.840 "bdev_io_cache_size": 256, 00:20:38.840 "bdev_auto_examine": true, 00:20:38.840 "iobuf_small_cache_size": 128, 00:20:38.840 "iobuf_large_cache_size": 16 00:20:38.840 } 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "method": "bdev_raid_set_options", 00:20:38.840 "params": { 00:20:38.840 "process_window_size_kb": 1024, 00:20:38.840 "process_max_bandwidth_mb_sec": 0 00:20:38.840 } 00:20:38.840 }, 00:20:38.840 { 00:20:38.840 "method": "bdev_iscsi_set_options", 00:20:38.840 "params": { 00:20:38.840 "timeout_sec": 30 00:20:38.840 } 00:20:38.840 }, 00:20:38.840 { 
00:20:38.840 "method": "bdev_nvme_set_options", 00:20:38.840 "params": { 00:20:38.840 "action_on_timeout": "none", 00:20:38.840 "timeout_us": 0, 00:20:38.840 "timeout_admin_us": 0, 00:20:38.840 "keep_alive_timeout_ms": 10000, 00:20:38.840 "arbitration_burst": 0, 00:20:38.840 "low_priority_weight": 0, 00:20:38.840 "medium_priority_weight": 0, 00:20:38.840 "high_priority_weight": 0, 00:20:38.840 "nvme_adminq_poll_period_us": 10000, 00:20:38.840 "nvme_ioq_poll_period_us": 0, 00:20:38.840 "io_queue_requests": 512, 00:20:38.840 "delay_cmd_submit": true, 00:20:38.840 "transport_retry_count": 4, 00:20:38.840 "bdev_retry_count": 3, 00:20:38.840 "transport_ack_timeout": 0, 00:20:38.841 "ctrlr_loss_timeout_sec": 0, 00:20:38.841 "reconnect_delay_sec": 0, 00:20:38.841 "fast_io_fail_timeout_sec": 0, 00:20:38.841 "disable_auto_failback": false, 00:20:38.841 "generate_uuids": false, 00:20:38.841 "transport_tos": 0, 00:20:38.841 "nvme_error_stat": false, 00:20:38.841 "rdma_srq_size": 0, 00:20:38.841 "io_path_stat": false, 00:20:38.841 "allow_accel_sequence": false, 00:20:38.841 "rdma_max_cq_size": 0, 00:20:38.841 "rdma_cm_event_timeout_ms": 0, 00:20:38.841 "dhchap_digests": [ 00:20:38.841 "sha256", 00:20:38.841 "sha384", 00:20:38.841 "sha512" 00:20:38.841 ], 00:20:38.841 "dhchap_dhgroups": [ 00:20:38.841 "null", 00:20:38.841 "ffdhe2048", 00:20:38.841 "ffdhe3072", 00:20:38.841 "ffdhe4096", 00:20:38.841 "ffdhe6144", 00:20:38.841 "ffdhe8192" 00:20:38.841 ] 00:20:38.841 } 00:20:38.841 }, 00:20:38.841 { 00:20:38.841 "method": "bdev_nvme_attach_controller", 00:20:38.841 "params": { 00:20:38.841 "name": "nvme0", 00:20:38.841 "trtype": "TCP", 00:20:38.841 "adrfam": "IPv4", 00:20:38.841 "traddr": "10.0.0.2", 00:20:38.841 "trsvcid": "4420", 00:20:38.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.841 "prchk_reftag": false, 00:20:38.841 "prchk_guard": false, 00:20:38.841 "ctrlr_loss_timeout_sec": 0, 00:20:38.841 "reconnect_delay_sec": 0, 00:20:38.841 "fast_io_fail_timeout_sec": 0, 00:20:38.841 "psk": "key0", 00:20:38.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.841 "hdgst": false, 00:20:38.841 "ddgst": false, 00:20:38.841 "multipath": "multipath" 00:20:38.841 } 00:20:38.841 }, 00:20:38.841 { 00:20:38.841 "method": "bdev_nvme_set_hotplug", 00:20:38.841 "params": { 00:20:38.841 "period_us": 100000, 00:20:38.841 "enable": false 00:20:38.841 } 00:20:38.841 }, 00:20:38.841 { 00:20:38.841 "method": "bdev_enable_histogram", 00:20:38.841 "params": { 00:20:38.841 "name": "nvme0n1", 00:20:38.841 "enable": true 00:20:38.841 } 00:20:38.841 }, 00:20:38.841 { 00:20:38.841 "method": "bdev_wait_for_examine" 00:20:38.841 } 00:20:38.841 ] 00:20:38.841 }, 00:20:38.841 { 00:20:38.841 "subsystem": "nbd", 00:20:38.841 "config": [] 00:20:38.841 } 00:20:38.841 ] 00:20:38.841 }' 00:20:38.841 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.841 [2024-11-26 07:30:06.757704] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:20:38.841 [2024-11-26 07:30:06.757755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762307 ] 00:20:38.841 [2024-11-26 07:30:06.820059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.841 [2024-11-26 07:30:06.861191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.100 [2024-11-26 07:30:07.015169] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.666 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.666 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.666 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:39.666 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:39.924 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.924 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.924 Running I/O for 1 seconds... 00:20:40.859 5316.00 IOPS, 20.77 MiB/s 00:20:40.859 Latency(us) 00:20:40.859 [2024-11-26T06:30:08.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.859 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:40.859 Verification LBA range: start 0x0 length 0x2000 00:20:40.859 nvme0n1 : 1.01 5367.33 20.97 0.00 0.00 23664.97 5014.93 20857.54 00:20:40.859 [2024-11-26T06:30:08.959Z] =================================================================================================================== 00:20:40.859 [2024-11-26T06:30:08.959Z] Total : 5367.33 20.97 0.00 0.00 23664.97 5014.93 20857.54 00:20:40.859 { 00:20:40.859 "results": [ 00:20:40.859 { 00:20:40.859 "job": "nvme0n1", 00:20:40.859 "core_mask": "0x2", 00:20:40.859 "workload": "verify", 00:20:40.859 "status": "finished", 00:20:40.859 "verify_range": { 00:20:40.859 "start": 0, 00:20:40.859 "length": 8192 00:20:40.859 }, 00:20:40.859 "queue_depth": 128, 00:20:40.859 "io_size": 4096, 00:20:40.859 "runtime": 1.014471, 00:20:40.859 "iops": 5367.329376591347, 00:20:40.859 "mibps": 20.96613037730995, 00:20:40.859 "io_failed": 0, 00:20:40.859 "io_timeout": 0, 00:20:40.859 "avg_latency_us": 23664.966656286182, 00:20:40.859 "min_latency_us": 5014.928695652174, 00:20:40.859 "max_latency_us": 20857.544347826086 00:20:40.859 } 00:20:40.859 ], 00:20:40.859 "core_count": 1 00:20:40.859 } 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:40.859 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:40.859 nvmf_trace.0 00:20:41.117 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:41.117 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 762307 00:20:41.117 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 762307 ']' 00:20:41.117 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 762307 00:20:41.117 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.117 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.117 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 762307 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 762307' 00:20:41.117 killing process with pid 762307 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 762307 00:20:41.117 Received shutdown signal, test time was about 1.000000 seconds 00:20:41.117 00:20:41.117 Latency(us) 00:20:41.117 [2024-11-26T06:30:09.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.117 [2024-11-26T06:30:09.217Z] =================================================================================================================== 00:20:41.117 [2024-11-26T06:30:09.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 762307 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.117 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.117 rmmod nvme_tcp 00:20:41.374 rmmod nvme_fabrics 00:20:41.374 rmmod nvme_keyring 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.374 07:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 762181 ']' 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 762181 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 762181 ']' 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 762181 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 762181 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 762181' 00:20:41.374 killing process with pid 762181 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 762181 00:20:41.374 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 762181 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.631 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UzWW5fhBhk /tmp/tmp.i4ib7ovh3V /tmp/tmp.XeW1M5Ac8P 00:20:43.532 00:20:43.532 real 1m18.554s 00:20:43.532 user 2m0.056s 00:20:43.532 sys 0m30.605s 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.532 ************************************ 00:20:43.532 END TEST nvmf_tls 00:20:43.532 
************************************ 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.532 ************************************ 00:20:43.532 START TEST nvmf_fips 00:20:43.532 ************************************ 00:20:43.532 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:43.792 * Looking for test storage... 00:20:43.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:43.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.792 --rc genhtml_branch_coverage=1 00:20:43.792 --rc genhtml_function_coverage=1 00:20:43.792 --rc genhtml_legend=1 00:20:43.792 --rc geninfo_all_blocks=1 00:20:43.792 --rc geninfo_unexecuted_blocks=1 00:20:43.792 00:20:43.792 ' 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:43.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.792 --rc genhtml_branch_coverage=1 00:20:43.792 --rc genhtml_function_coverage=1 00:20:43.792 --rc genhtml_legend=1 00:20:43.792 --rc geninfo_all_blocks=1 00:20:43.792 --rc geninfo_unexecuted_blocks=1 00:20:43.792 00:20:43.792 ' 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:43.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.792 --rc genhtml_branch_coverage=1 00:20:43.792 --rc genhtml_function_coverage=1 00:20:43.792 --rc genhtml_legend=1 00:20:43.792 --rc geninfo_all_blocks=1 00:20:43.792 --rc geninfo_unexecuted_blocks=1 00:20:43.792 00:20:43.792 ' 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:43.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.792 --rc genhtml_branch_coverage=1 00:20:43.792 --rc genhtml_function_coverage=1 00:20:43.792 --rc genhtml_legend=1 00:20:43.792 --rc geninfo_all_blocks=1 00:20:43.792 --rc geninfo_unexecuted_blocks=1 00:20:43.792 00:20:43.792 ' 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:43.792 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:43.793 07:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:43.793 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:44.052 Error setting digest 00:20:44.052 40F24542087F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:44.052 40F24542087F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.052 
07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.052 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.053 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.053 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.053 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.318 07:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:49.318 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:49.318 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.318 07:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:49.318 Found net devices under 0000:86:00.0: cvl_0_0 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:49.318 Found net devices under 0000:86:00.1: cvl_0_1 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:49.318 07:30:17 
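The trace above is gather_supported_nvmf_pci_devs mapping the Intel E810 ports (device ID 0x159b) to their kernel net interfaces through sysfs. A minimal standalone sketch of the same lookup, assuming lspci is available (the script itself walks a cached PCI device list rather than calling lspci):

  # Map each Intel E810 (8086:159b) PCI function to the net device behind it, as the trace does via /sys.
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
    done
  done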
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.318 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:49.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:20:49.319 00:20:49.319 --- 10.0.0.2 ping statistics --- 00:20:49.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.319 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:49.319 00:20:49.319 --- 10.0.0.1 ping statistics --- 00:20:49.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.319 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:49.319 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=766625 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 766625 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 766625 ']' 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.577 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:49.577 [2024-11-26 07:30:17.516512] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
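nvmf_tcp_init, traced above, builds the two-sided test topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened, and both directions are verified with ping. A minimal sketch of the same sequence, using the interface names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in from the initiator port
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator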
00:20:49.577 [2024-11-26 07:30:17.516559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.577 [2024-11-26 07:30:17.582602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.577 [2024-11-26 07:30:17.624272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.577 [2024-11-26 07:30:17.624308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.577 [2024-11-26 07:30:17.624318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.577 [2024-11-26 07:30:17.624324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.577 [2024-11-26 07:30:17.624329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.577 [2024-11-26 07:30:17.624895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Kzc 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Kzc 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Kzc 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Kzc 00:20:50.512 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:50.512 [2024-11-26 07:30:18.542558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.512 [2024-11-26 07:30:18.558561] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.512 [2024-11-26 07:30:18.558782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.512 malloc0 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.770 07:30:18 
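fips.sh then writes the TLS PSK to a 0600 temp file (/tmp/spdk-psk.Kzc) and configures the target via rpc.py; the trace collapses that setup_nvmf_tgt_conf call into a single line. A hedged sketch of a typical rpc.py sequence for a TLS-enabled target follows; the NQNs, bdev name, key path, and address match this run, but the malloc sizes and the --psk / --secure-channel flags are assumptions based on SPDK's TLS support, not read out of the trace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/tmp/spdk-psk.Kzc
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_malloc_create -b malloc0 32 512                       # sizes illustrative
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"            # assumed flag
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel   # assumed flag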
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=766874 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 766874 /var/tmp/bdevperf.sock 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 766874 ']' 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:50.770 [2024-11-26 07:30:18.676575] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:20:50.770 [2024-11-26 07:30:18.676627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766874 ] 00:20:50.770 [2024-11-26 07:30:18.733960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.770 [2024-11-26 07:30:18.775276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:50.770 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Kzc 00:20:51.028 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.285 [2024-11-26 07:30:19.218416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.285 TLSTESTn1 00:20:51.285 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:51.543 Running I/O for 10 seconds... 
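On the initiator side, the trace above reduces to three steps against the bdevperf RPC socket: register the PSK file as key0, attach a TLS-protected NVMe/TCP controller to the subsystem, and drive I/O through bdevperf's perform_tests helper. Condensed into one place, with every path and argument taken from the trace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  $RPC -s $SOCK keyring_file_add_key key0 /tmp/spdk-psk.Kzc
  $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests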
00:20:53.411 5209.00 IOPS, 20.35 MiB/s [2024-11-26T06:30:22.445Z] 5267.50 IOPS, 20.58 MiB/s [2024-11-26T06:30:23.819Z] 5339.33 IOPS, 20.86 MiB/s [2024-11-26T06:30:24.753Z] 5361.50 IOPS, 20.94 MiB/s [2024-11-26T06:30:25.687Z] 5387.00 IOPS, 21.04 MiB/s [2024-11-26T06:30:26.622Z] 5404.33 IOPS, 21.11 MiB/s [2024-11-26T06:30:27.556Z] 5417.00 IOPS, 21.16 MiB/s [2024-11-26T06:30:28.490Z] 5435.62 IOPS, 21.23 MiB/s [2024-11-26T06:30:29.866Z] 5390.44 IOPS, 21.06 MiB/s [2024-11-26T06:30:29.866Z] 5274.30 IOPS, 20.60 MiB/s 00:21:01.766 Latency(us) 00:21:01.766 [2024-11-26T06:30:29.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.766 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.766 Verification LBA range: start 0x0 length 0x2000 00:21:01.766 TLSTESTn1 : 10.02 5274.51 20.60 0.00 0.00 24223.12 6211.67 36700.16 00:21:01.766 [2024-11-26T06:30:29.866Z] =================================================================================================================== 00:21:01.766 [2024-11-26T06:30:29.866Z] Total : 5274.51 20.60 0.00 0.00 24223.12 6211.67 36700.16 00:21:01.766 { 00:21:01.766 "results": [ 00:21:01.766 { 00:21:01.766 "job": "TLSTESTn1", 00:21:01.766 "core_mask": "0x4", 00:21:01.766 "workload": "verify", 00:21:01.766 "status": "finished", 00:21:01.766 "verify_range": { 00:21:01.766 "start": 0, 00:21:01.766 "length": 8192 00:21:01.766 }, 00:21:01.766 "queue_depth": 128, 00:21:01.766 "io_size": 4096, 00:21:01.766 "runtime": 10.023671, 00:21:01.766 "iops": 5274.514696262477, 00:21:01.766 "mibps": 20.6035730322753, 00:21:01.766 "io_failed": 0, 00:21:01.766 "io_timeout": 0, 00:21:01.766 "avg_latency_us": 24223.12360931242, 00:21:01.766 "min_latency_us": 6211.673043478261, 00:21:01.766 "max_latency_us": 36700.16 00:21:01.766 } 00:21:01.766 ], 00:21:01.766 "core_count": 1 00:21:01.766 } 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:01.766 nvmf_trace.0 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 766874 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 766874 ']' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- 
# kill -0 766874 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766874 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766874' 00:21:01.766 killing process with pid 766874 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 766874 00:21:01.766 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.766 00:21:01.766 Latency(us) 00:21:01.766 [2024-11-26T06:30:29.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.766 [2024-11-26T06:30:29.866Z] =================================================================================================================== 00:21:01.766 [2024-11-26T06:30:29.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 766874 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.766 rmmod nvme_tcp 00:21:01.766 rmmod nvme_fabrics 00:21:01.766 rmmod nvme_keyring 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 766625 ']' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 766625 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 766625 ']' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 766625 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.766 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 766625 00:21:02.025 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.025 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.025 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 766625' 00:21:02.025 killing process with pid 766625 00:21:02.025 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 766625 00:21:02.025 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 766625 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.025 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Kzc 00:21:04.557 00:21:04.557 real 0m20.510s 00:21:04.557 user 0m21.749s 00:21:04.557 sys 0m9.302s 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:04.557 ************************************ 00:21:04.557 END TEST nvmf_fips 00:21:04.557 ************************************ 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.557 ************************************ 00:21:04.557 START TEST nvmf_control_msg_list 00:21:04.557 ************************************ 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:04.557 * Looking for test storage... 
00:21:04.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:04.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.557 --rc genhtml_branch_coverage=1 00:21:04.557 --rc genhtml_function_coverage=1 00:21:04.557 --rc genhtml_legend=1 00:21:04.557 --rc geninfo_all_blocks=1 00:21:04.557 --rc geninfo_unexecuted_blocks=1 00:21:04.557 00:21:04.557 ' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:04.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.557 --rc genhtml_branch_coverage=1 00:21:04.557 --rc genhtml_function_coverage=1 00:21:04.557 --rc genhtml_legend=1 00:21:04.557 --rc geninfo_all_blocks=1 00:21:04.557 --rc geninfo_unexecuted_blocks=1 00:21:04.557 00:21:04.557 ' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:04.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.557 --rc genhtml_branch_coverage=1 00:21:04.557 --rc genhtml_function_coverage=1 00:21:04.557 --rc genhtml_legend=1 00:21:04.557 --rc geninfo_all_blocks=1 00:21:04.557 --rc geninfo_unexecuted_blocks=1 00:21:04.557 00:21:04.557 ' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:04.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.557 --rc genhtml_branch_coverage=1 00:21:04.557 --rc genhtml_function_coverage=1 00:21:04.557 --rc genhtml_legend=1 00:21:04.557 --rc geninfo_all_blocks=1 00:21:04.557 --rc geninfo_unexecuted_blocks=1 00:21:04.557 00:21:04.557 ' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.557 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.558 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:09.832 07:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:09.832 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.832 07:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:09.832 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:09.832 Found net devices under 0000:86:00.0: cvl_0_0 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:09.832 Found net devices under 0000:86:00.1: cvl_0_1 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:09.832 07:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:09.832 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:09.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:21:09.832 00:21:09.832 --- 10.0.0.2 ping statistics --- 00:21:09.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.833 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:09.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:21:09.833 00:21:09.833 --- 10.0.0.1 ping statistics --- 00:21:09.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.833 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=772074 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 772074 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 772074 ']' 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.833 07:30:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:09.833 [2024-11-26 07:30:37.847175] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:21:09.833 [2024-11-26 07:30:37.847220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.833 [2024-11-26 07:30:37.913490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.092 [2024-11-26 07:30:37.955303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.092 [2024-11-26 07:30:37.955338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.092 [2024-11-26 07:30:37.955345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.092 [2024-11-26 07:30:37.955351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.092 [2024-11-26 07:30:37.955356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:10.092 [2024-11-26 07:30:37.955953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.092 [2024-11-26 07:30:38.082831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.092 Malloc0 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.092 07:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.092 [2024-11-26 07:30:38.119086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=772256 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=772257 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=772258 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 772256 00:21:10.092 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.350 [2024-11-26 07:30:38.187580] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:10.350 [2024-11-26 07:30:38.187764] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:10.350 [2024-11-26 07:30:38.197387] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:11.286 Initializing NVMe Controllers 00:21:11.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:11.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:11.286 Initialization complete. Launching workers. 
00:21:11.286 ======================================================== 00:21:11.286 Latency(us) 00:21:11.286 Device Information : IOPS MiB/s Average min max 00:21:11.286 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5201.00 20.32 191.90 138.49 461.70 00:21:11.286 ======================================================== 00:21:11.286 Total : 5201.00 20.32 191.90 138.49 461.70 00:21:11.286 00:21:11.286 Initializing NVMe Controllers 00:21:11.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:11.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:11.286 Initialization complete. Launching workers. 00:21:11.286 ======================================================== 00:21:11.286 Latency(us) 00:21:11.286 Device Information : IOPS MiB/s Average min max 00:21:11.286 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5165.00 20.18 193.27 135.05 445.30 00:21:11.286 ======================================================== 00:21:11.286 Total : 5165.00 20.18 193.27 135.05 445.30 00:21:11.286 00:21:11.286 [2024-11-26 07:30:39.331662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5dd50 is same with the state(6) to be set 00:21:11.547 Initializing NVMe Controllers 00:21:11.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:11.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:11.547 Initialization complete. Launching workers. 00:21:11.547 ======================================================== 00:21:11.547 Latency(us) 00:21:11.547 Device Information : IOPS MiB/s Average min max 00:21:11.547 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 329.00 1.29 3114.47 244.96 42105.71 00:21:11.547 ======================================================== 00:21:11.547 Total : 329.00 1.29 3114.47 244.96 42105.71 00:21:11.547 00:21:11.547 [2024-11-26 07:30:39.437337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d880 is same with the state(6) to be set 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 772257 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 772258 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.547 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.547 rmmod nvme_tcp 00:21:11.547 rmmod nvme_fabrics 00:21:11.547 rmmod nvme_keyring 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 772074 ']' 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 772074 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 772074 ']' 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 772074 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772074 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772074' 00:21:11.548 killing process with pid 772074 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 772074 00:21:11.548 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 772074 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.807 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.341 00:21:14.341 real 0m9.616s 00:21:14.341 user 0m6.518s 00:21:14.341 sys 0m5.142s 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:14.341 ************************************ 00:21:14.341 END TEST nvmf_control_msg_list 00:21:14.341 ************************************ 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:14.341 ************************************ 00:21:14.341 START TEST nvmf_wait_for_buf 00:21:14.341 ************************************ 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:14.341 * Looking for test storage... 00:21:14.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:14.341 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:14.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.341 --rc genhtml_branch_coverage=1 00:21:14.341 --rc genhtml_function_coverage=1 00:21:14.341 --rc genhtml_legend=1 00:21:14.341 --rc geninfo_all_blocks=1 00:21:14.341 --rc geninfo_unexecuted_blocks=1 00:21:14.341 00:21:14.341 ' 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:14.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.341 --rc genhtml_branch_coverage=1 00:21:14.341 --rc genhtml_function_coverage=1 00:21:14.341 --rc genhtml_legend=1 00:21:14.341 --rc geninfo_all_blocks=1 00:21:14.341 --rc geninfo_unexecuted_blocks=1 00:21:14.341 00:21:14.341 ' 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:14.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.341 --rc genhtml_branch_coverage=1 00:21:14.341 --rc genhtml_function_coverage=1 00:21:14.341 --rc genhtml_legend=1 00:21:14.341 --rc geninfo_all_blocks=1 00:21:14.341 --rc geninfo_unexecuted_blocks=1 00:21:14.341 00:21:14.341 ' 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:14.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.341 --rc genhtml_branch_coverage=1 00:21:14.341 --rc genhtml_function_coverage=1 00:21:14.341 --rc genhtml_legend=1 00:21:14.341 --rc geninfo_all_blocks=1 00:21:14.341 --rc geninfo_unexecuted_blocks=1 00:21:14.341 00:21:14.341 ' 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.341 07:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.341 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:14.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.342 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.618 
07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:19.618 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:19.618 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.618 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:19.619 Found net devices under 0000:86:00.0: cvl_0_0 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:19.619 Found net devices under 0000:86:00.1: cvl_0_1 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.619 07:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.619 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:21:19.878 00:21:19.878 --- 10.0.0.2 ping statistics --- 00:21:19.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.878 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:21:19.878 00:21:19.878 --- 10.0.0.1 ping statistics --- 00:21:19.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.878 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=775926 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 775926 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 775926 ']' 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.878 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.878 [2024-11-26 07:30:47.868785] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:21:19.878 [2024-11-26 07:30:47.868835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.878 [2024-11-26 07:30:47.935136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.136 [2024-11-26 07:30:47.975346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.136 [2024-11-26 07:30:47.975396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.136 [2024-11-26 07:30:47.975404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.136 [2024-11-26 07:30:47.975410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.136 [2024-11-26 07:30:47.975415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.137 [2024-11-26 07:30:47.975996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.137 07:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 Malloc0 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 [2024-11-26 07:30:48.161150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:20.137 [2024-11-26 07:30:48.189359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.137 07:30:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:20.395 [2024-11-26 07:30:48.275036] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:21.767 Initializing NVMe Controllers 00:21:21.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:21.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:21.767 Initialization complete. Launching workers. 00:21:21.767 ======================================================== 00:21:21.767 Latency(us) 00:21:21.767 Device Information : IOPS MiB/s Average min max 00:21:21.767 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.93 15.99 32365.77 7263.44 65410.61 00:21:21.767 ======================================================== 00:21:21.767 Total : 127.93 15.99 32365.77 7263.44 65410.61 00:21:21.767 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.767 rmmod nvme_tcp 00:21:21.767 rmmod nvme_fabrics 00:21:21.767 rmmod nvme_keyring 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 775926 ']' 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 775926 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 775926 ']' 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 775926 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775926 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775926' 00:21:21.767 killing process with pid 775926 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 775926 00:21:21.767 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 775926 00:21:22.026 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.027 07:30:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.563 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:24.563 00:21:24.563 real 0m10.175s 00:21:24.563 user 0m3.895s 00:21:24.563 sys 0m4.751s 00:21:24.563 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.563 07:30:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.563 ************************************ 00:21:24.563 END TEST nvmf_wait_for_buf 00:21:24.563 ************************************ 00:21:24.563 07:30:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:24.563 07:30:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:24.563 07:30:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:24.563 07:30:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:24.563 07:30:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.563 07:30:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:28.758 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:28.758 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:28.758 Found net devices under 0000:86:00.0: cvl_0_0 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:28.758 Found net devices under 0000:86:00.1: cvl_0_1 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:28.758 07:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.759 07:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:28.759 ************************************ 00:21:28.759 START TEST nvmf_perf_adq 00:21:28.759 ************************************ 00:21:28.759 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:29.019 * Looking for test storage... 00:21:29.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:29.019 07:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:29.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.019 --rc genhtml_branch_coverage=1 00:21:29.019 --rc genhtml_function_coverage=1 00:21:29.019 --rc genhtml_legend=1 00:21:29.019 --rc geninfo_all_blocks=1 00:21:29.019 --rc geninfo_unexecuted_blocks=1 00:21:29.019 00:21:29.019 ' 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:29.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.019 --rc genhtml_branch_coverage=1 00:21:29.019 --rc genhtml_function_coverage=1 00:21:29.019 --rc genhtml_legend=1 00:21:29.019 --rc geninfo_all_blocks=1 00:21:29.019 --rc geninfo_unexecuted_blocks=1 00:21:29.019 00:21:29.019 ' 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:29.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.019 --rc genhtml_branch_coverage=1 00:21:29.019 --rc genhtml_function_coverage=1 00:21:29.019 --rc genhtml_legend=1 00:21:29.019 --rc geninfo_all_blocks=1 00:21:29.019 --rc geninfo_unexecuted_blocks=1 00:21:29.019 00:21:29.019 ' 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:29.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.019 --rc genhtml_branch_coverage=1 00:21:29.019 --rc genhtml_function_coverage=1 00:21:29.019 --rc genhtml_legend=1 00:21:29.019 --rc geninfo_all_blocks=1 00:21:29.019 --rc geninfo_unexecuted_blocks=1 00:21:29.019 00:21:29.019 ' 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
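The trace above shows scripts/common.sh deciding whether the installed lcov predates version 2: the `lt 1.15 2` call splits both dotted versions on '.', then walks the fields until one side wins, and the result picks the older `--rc lcov_branch_coverage=1` option spelling exported as LCOV_OPTS in the entries just before this point. As a rough illustration of that field-wise comparison (a standalone sketch, not the repo's cmp_versions helper; the function name and sample versions below are ours, and purely numeric dotted versions are assumed):

#!/usr/bin/env bash
# version_lt A B -> succeeds (exit 0) when dotted version A sorts strictly below B.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)              # split on dots: "1.15" -> (1 15)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}       # missing fields count as 0, so "2" behaves like "2.0"
        (( 10#$x < 10#$y )) && return 0 # first differing field decides: A is lower
        (( 10#$x > 10#$y )) && return 1 # A is higher
    done
    return 1                            # equal versions are not "lower than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2, keep the legacy lcov options"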
00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.019 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.020 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.020 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.020 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:29.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:29.020 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:29.020 07:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.289 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.290 07:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:34.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:34.290 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:34.290 Found net devices under 0000:86:00.0: cvl_0_0 00:21:34.290 07:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:34.290 Found net devices under 0000:86:00.1: cvl_0_1 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:34.290 07:31:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:34.857 07:31:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:36.763 07:31:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:42.038 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:42.039 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:42.039 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:42.039 Found net devices under 0000:86:00.0: cvl_0_0 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:42.039 Found net devices under 0000:86:00.1: cvl_0_1 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.039 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.039 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.039 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.039 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.039 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:21:42.039 00:21:42.039 --- 10.0.0.2 ping statistics --- 00:21:42.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.039 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:21:42.039 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:21:42.039 00:21:42.039 --- 10.0.0.1 ping statistics --- 00:21:42.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.040 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=783907 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 783907 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 783907 ']' 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:42.040 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.040 [2024-11-26 07:31:10.129805] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
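Before nvmf_tgt comes up, the nvmftestinit/nvmf_tcp_init trace above builds a two-sided test bed out of the two E810 ports: the target-side port is moved into its own network namespace while the initiator port stays on the host, each side gets one address of the 10.0.0.0/24 pair, an iptables exception opens the NVMe/TCP listener port, and a ping in each direction confirms reachability. Condensed into plain commands (interface names, addresses and port taken from this log; run as root; address flushing, error handling and the SPDK_NVMF comment tag the harness attaches to the iptables rule are omitted), the setup amounts to:

#!/usr/bin/env bash
set -e

ip netns add cvl_0_0_ns_spdk                         # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (namespace side)

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP reach the listener

ping -c 1 10.0.0.2                                   # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> host sanity check

# The target application is then launched inside the namespace, as seen in this job
# (path shortened here for illustration):
# ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc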
00:21:42.040 [2024-11-26 07:31:10.129852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.299 [2024-11-26 07:31:10.196592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.299 [2024-11-26 07:31:10.240903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.299 [2024-11-26 07:31:10.240941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.299 [2024-11-26 07:31:10.240953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.299 [2024-11-26 07:31:10.240960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.299 [2024-11-26 07:31:10.240981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.299 [2024-11-26 07:31:10.242593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.299 [2024-11-26 07:31:10.242693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.299 [2024-11-26 07:31:10.242786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.299 [2024-11-26 07:31:10.242788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.299 
07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.299 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.557 [2024-11-26 07:31:10.444448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.557 Malloc1 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.557 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.558 [2024-11-26 07:31:10.505055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=784142 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:42.558 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:44.460 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:44.460 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.460 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.460 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.460 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:44.460 "tick_rate": 2300000000, 00:21:44.460 "poll_groups": [ 00:21:44.460 { 00:21:44.460 "name": "nvmf_tgt_poll_group_000", 00:21:44.460 "admin_qpairs": 1, 00:21:44.460 "io_qpairs": 1, 00:21:44.460 "current_admin_qpairs": 1, 00:21:44.460 "current_io_qpairs": 1, 00:21:44.460 "pending_bdev_io": 0, 00:21:44.460 "completed_nvme_io": 20325, 00:21:44.460 "transports": [ 00:21:44.460 { 00:21:44.460 "trtype": "TCP" 00:21:44.460 } 00:21:44.460 ] 00:21:44.460 }, 00:21:44.460 { 00:21:44.460 "name": "nvmf_tgt_poll_group_001", 00:21:44.460 "admin_qpairs": 0, 00:21:44.460 "io_qpairs": 1, 00:21:44.460 "current_admin_qpairs": 0, 00:21:44.460 "current_io_qpairs": 1, 00:21:44.460 "pending_bdev_io": 0, 00:21:44.460 "completed_nvme_io": 20387, 00:21:44.460 "transports": [ 00:21:44.460 { 00:21:44.460 "trtype": "TCP" 00:21:44.460 } 00:21:44.460 ] 00:21:44.460 }, 00:21:44.460 { 00:21:44.460 "name": "nvmf_tgt_poll_group_002", 00:21:44.460 "admin_qpairs": 0, 00:21:44.460 "io_qpairs": 1, 00:21:44.460 "current_admin_qpairs": 0, 00:21:44.460 "current_io_qpairs": 1, 00:21:44.460 "pending_bdev_io": 0, 00:21:44.460 "completed_nvme_io": 20416, 00:21:44.460 "transports": [ 00:21:44.460 { 00:21:44.460 "trtype": "TCP" 00:21:44.460 } 00:21:44.460 ] 00:21:44.460 }, 00:21:44.460 { 00:21:44.460 "name": "nvmf_tgt_poll_group_003", 00:21:44.460 "admin_qpairs": 0, 00:21:44.460 "io_qpairs": 1, 00:21:44.460 "current_admin_qpairs": 0, 00:21:44.460 "current_io_qpairs": 1, 00:21:44.460 "pending_bdev_io": 0, 00:21:44.460 "completed_nvme_io": 20221, 00:21:44.460 "transports": [ 00:21:44.460 { 00:21:44.460 "trtype": "TCP" 00:21:44.460 } 00:21:44.460 ] 00:21:44.460 } 00:21:44.460 ] 00:21:44.460 }' 00:21:44.460 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:44.460 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:44.719 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:44.719 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:44.719 07:31:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 784142 00:21:52.850 Initializing NVMe Controllers 00:21:52.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:52.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:52.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:52.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:52.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:21:52.850 Initialization complete. Launching workers. 00:21:52.850 ======================================================== 00:21:52.850 Latency(us) 00:21:52.850 Device Information : IOPS MiB/s Average min max 00:21:52.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10473.90 40.91 6110.38 1312.90 10847.79 00:21:52.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10699.70 41.80 5982.55 1450.53 10604.94 00:21:52.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10605.40 41.43 6034.59 2042.34 10126.66 00:21:52.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10619.60 41.48 6026.12 1994.42 10938.28 00:21:52.850 ======================================================== 00:21:52.850 Total : 42398.59 165.62 6038.06 1312.90 10938.28 00:21:52.850 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.850 rmmod nvme_tcp 00:21:52.850 rmmod nvme_fabrics 00:21:52.850 rmmod nvme_keyring 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 783907 ']' 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 783907 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 783907 ']' 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 783907 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783907 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783907' 00:21:52.850 killing process with pid 783907 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 783907 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 783907 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:52.850 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.110 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.110 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.110 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.110 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.110 07:31:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.016 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:55.016 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:55.016 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:55.016 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:56.393 07:31:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:58.294 07:31:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:03.565 07:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:03.565 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.565 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:03.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:03.566 Found net devices under 0000:86:00.0: cvl_0_0 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.566 07:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:03.566 Found net devices under 0000:86:00.1: cvl_0_1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:03.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:22:03.566 00:22:03.566 --- 10.0.0.2 ping statistics --- 00:22:03.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.566 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:03.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:22:03.566 00:22:03.566 --- 10.0.0.1 ping statistics --- 00:22:03.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.566 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:03.566 net.core.busy_poll = 1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:22:03.566 net.core.busy_read = 1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=787872 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 787872 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 787872 ']' 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.566 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.826 [2024-11-26 07:31:31.660407] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:22:03.826 [2024-11-26 07:31:31.660463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.826 [2024-11-26 07:31:31.727389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.826 [2024-11-26 07:31:31.771440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:03.826 [2024-11-26 07:31:31.771477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.826 [2024-11-26 07:31:31.771485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.826 [2024-11-26 07:31:31.771491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.826 [2024-11-26 07:31:31.771498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.826 [2024-11-26 07:31:31.773050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.826 [2024-11-26 07:31:31.773154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.826 [2024-11-26 07:31:31.773255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.826 [2024-11-26 07:31:31.773257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.826 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.086 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.086 07:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:04.086 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.086 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.086 [2024-11-26 07:31:31.985667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.086 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.086 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:04.086 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.086 07:31:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.086 Malloc1 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:04.086 [2024-11-26 07:31:32.045210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=787957 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:04.086 07:31:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:05.989 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:05.989 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.989 07:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.989 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.989 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:05.989 "tick_rate": 2300000000, 00:22:05.989 "poll_groups": [ 00:22:05.989 { 00:22:05.989 "name": "nvmf_tgt_poll_group_000", 00:22:05.989 "admin_qpairs": 1, 00:22:05.989 "io_qpairs": 2, 00:22:05.989 "current_admin_qpairs": 1, 00:22:05.989 "current_io_qpairs": 2, 00:22:05.989 "pending_bdev_io": 0, 00:22:05.989 "completed_nvme_io": 28171, 00:22:05.989 "transports": [ 00:22:05.989 { 00:22:05.989 "trtype": "TCP" 00:22:05.989 } 00:22:05.989 ] 00:22:05.989 }, 00:22:05.989 { 00:22:05.989 "name": "nvmf_tgt_poll_group_001", 00:22:05.989 "admin_qpairs": 0, 00:22:05.989 "io_qpairs": 2, 00:22:05.989 "current_admin_qpairs": 0, 00:22:05.989 "current_io_qpairs": 2, 00:22:05.989 "pending_bdev_io": 0, 00:22:05.989 "completed_nvme_io": 27870, 00:22:05.989 "transports": [ 00:22:05.989 { 00:22:05.989 "trtype": "TCP" 00:22:05.989 } 00:22:05.989 ] 00:22:05.989 }, 00:22:05.989 { 00:22:05.989 "name": "nvmf_tgt_poll_group_002", 00:22:05.989 "admin_qpairs": 0, 00:22:05.989 "io_qpairs": 0, 00:22:05.989 "current_admin_qpairs": 0, 00:22:05.989 "current_io_qpairs": 0, 00:22:05.989 "pending_bdev_io": 0, 00:22:05.989 "completed_nvme_io": 0, 00:22:05.989 "transports": [ 00:22:05.989 { 00:22:05.989 "trtype": "TCP" 00:22:05.989 } 00:22:05.989 ] 00:22:05.989 }, 00:22:05.989 { 00:22:05.989 "name": "nvmf_tgt_poll_group_003", 00:22:05.989 "admin_qpairs": 0, 00:22:05.989 "io_qpairs": 0, 00:22:05.989 "current_admin_qpairs": 0, 00:22:05.989 "current_io_qpairs": 0, 00:22:05.989 "pending_bdev_io": 0, 00:22:05.989 "completed_nvme_io": 0, 00:22:05.989 "transports": [ 00:22:05.989 { 00:22:05.989 "trtype": "TCP" 00:22:05.989 } 00:22:05.989 ] 00:22:05.989 } 00:22:05.989 ] 00:22:05.989 }' 00:22:05.989 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:05.989 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:06.248 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:06.248 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:06.248 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 787957 00:22:14.361 Initializing NVMe Controllers 00:22:14.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:14.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:14.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:14.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:14.361 Initialization complete. Launching workers. 
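Editor's note: the nvmf_get_stats check traced just above (target/perf_adq.sh@107-109) verifies that ADQ kept the I/O connections on two busy-poll groups by counting how many poll groups still report current_io_qpairs == 0. A minimal stand-alone sketch of that counting step, assuming a running SPDK nvmf target and the stock scripts/rpc.py on its default RPC socket (the test itself goes through its rpc_cmd wrapper inside the network namespace):

# Count poll groups that report no active I/O queue pairs (sketch only).
idle_groups=$(scripts/rpc.py nvmf_get_stats \
  | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
  | wc -l)
# Two of the four groups carry the steered connections, so at least two must
# remain idle; fewer idle groups would mean traffic leaked across poll groups.
if [[ $idle_groups -lt 2 ]]; then
  echo "ADQ steering check failed: only $idle_groups idle poll groups"
fi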
00:22:14.361 ======================================================== 00:22:14.361 Latency(us) 00:22:14.361 Device Information : IOPS MiB/s Average min max 00:22:14.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7231.50 28.25 8882.54 1370.26 53546.37 00:22:14.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7755.70 30.30 8252.26 1433.24 54278.47 00:22:14.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8661.20 33.83 7388.32 1436.37 52941.51 00:22:14.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6082.80 23.76 10535.62 1507.29 54204.38 00:22:14.361 ======================================================== 00:22:14.361 Total : 29731.19 116.14 8621.04 1370.26 54278.47 00:22:14.361 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.361 rmmod nvme_tcp 00:22:14.361 rmmod nvme_fabrics 00:22:14.361 rmmod nvme_keyring 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 787872 ']' 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 787872 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 787872 ']' 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 787872 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787872 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787872' 00:22:14.361 killing process with pid 787872 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 787872 00:22:14.361 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 787872 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.620 07:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.620 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:17.911 00:22:17.911 real 0m48.810s 00:22:17.911 user 2m43.185s 00:22:17.911 sys 0m9.875s 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.911 ************************************ 00:22:17.911 END TEST nvmf_perf_adq 00:22:17.911 ************************************ 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.911 ************************************ 00:22:17.911 START TEST nvmf_shutdown 00:22:17.911 ************************************ 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:17.911 * Looking for test storage... 
00:22:17.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.911 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:17.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.912 --rc genhtml_branch_coverage=1 00:22:17.912 --rc genhtml_function_coverage=1 00:22:17.912 --rc genhtml_legend=1 00:22:17.912 --rc geninfo_all_blocks=1 00:22:17.912 --rc geninfo_unexecuted_blocks=1 00:22:17.912 00:22:17.912 ' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:17.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.912 --rc genhtml_branch_coverage=1 00:22:17.912 --rc genhtml_function_coverage=1 00:22:17.912 --rc genhtml_legend=1 00:22:17.912 --rc geninfo_all_blocks=1 00:22:17.912 --rc geninfo_unexecuted_blocks=1 00:22:17.912 00:22:17.912 ' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:17.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.912 --rc genhtml_branch_coverage=1 00:22:17.912 --rc genhtml_function_coverage=1 00:22:17.912 --rc genhtml_legend=1 00:22:17.912 --rc geninfo_all_blocks=1 00:22:17.912 --rc geninfo_unexecuted_blocks=1 00:22:17.912 00:22:17.912 ' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:17.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.912 --rc genhtml_branch_coverage=1 00:22:17.912 --rc genhtml_function_coverage=1 00:22:17.912 --rc genhtml_legend=1 00:22:17.912 --rc geninfo_all_blocks=1 00:22:17.912 --rc geninfo_unexecuted_blocks=1 00:22:17.912 00:22:17.912 ' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
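Editor's note: the scripts/common.sh trace above (lt 1.15 2 -> cmp_versions 1.15 '<' 2) is the version helper autotest_common.sh uses to decide whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_* options set just above. A simplified sketch of that element-wise comparison; the name ver_lt is invented for illustration, and the real helper additionally validates every field through its decimal check and supports the other comparison operators:

# Simplified sketch of the version comparison walked through in the trace above.
ver_lt() { # usage: ver_lt 1.15 2  -> succeeds when $1 is older than $2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<<"$1"   # split on '.', '-' and ':' like scripts/common.sh
  IFS=.-: read -ra ver2 <<<"$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < max; v++)); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo 'lcov is older than 2.x, keep the legacy lcov options'   # true for the 1.15 seen above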
00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:17.912 07:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:17.912 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:17.913 ************************************ 00:22:17.913 START TEST nvmf_shutdown_tc1 00:22:17.913 ************************************ 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.913 07:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.185 07:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.185 07:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:23.185 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:23.185 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.185 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:23.186 Found net devices under 0000:86:00.0: cvl_0_0 00:22:23.186 07:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:23.186 Found net devices under 0000:86:00.1: cvl_0_1 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.186 07:31:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:22:23.186 00:22:23.186 --- 10.0.0.2 ping statistics --- 00:22:23.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.186 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:22:23.186 00:22:23.186 --- 10.0.0.1 ping statistics --- 00:22:23.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.186 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=793250 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 793250 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 793250 ']' 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
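Condensed from the nvmf_tcp_init and nvmfappstart traces above, the test topology amounts to the following steps (a sketch reconstructed from the xtrace, not a standalone script; cvl_0_0 and cvl_0_1 are the two E810 ports detected earlier, and the target is launched inside the cvl_0_0_ns_spdk namespace):
ip netns add cvl_0_0_ns_spdk                                          # namespace that will own the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the first port into that namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP listen port (rule tagged SPDK_NVMF for later cleanup)
ping -c 1 10.0.0.2                                                    # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator reachability check
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E   # start the target in the namespace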
00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.186 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.445 [2024-11-26 07:31:51.298257] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:22:23.445 [2024-11-26 07:31:51.298303] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.445 [2024-11-26 07:31:51.362462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.445 [2024-11-26 07:31:51.406578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.445 [2024-11-26 07:31:51.406613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.445 [2024-11-26 07:31:51.406621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.445 [2024-11-26 07:31:51.406628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.445 [2024-11-26 07:31:51.406633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.445 [2024-11-26 07:31:51.408441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.445 [2024-11-26 07:31:51.408507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.445 [2024-11-26 07:31:51.408546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.445 [2024-11-26 07:31:51.408545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:23.445 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.445 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:23.445 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.445 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.445 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.704 [2024-11-26 07:31:51.552460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:23.704 07:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.704 07:31:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.704 Malloc1 
00:22:23.704 [2024-11-26 07:31:51.664990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.704 Malloc2 00:22:23.704 Malloc3 00:22:23.704 Malloc4 00:22:23.963 Malloc5 00:22:23.963 Malloc6 00:22:23.963 Malloc7 00:22:23.963 Malloc8 00:22:23.963 Malloc9 00:22:23.963 Malloc10 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=793453 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 793453 /var/tmp/bdevperf.sock 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 793453 ']' 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
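The bdev_svc instance started above reads its configuration from --json /dev/fd/63, which is a process substitution of gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10: the heredoc loop traced below emits one bdev_nvme_attach_controller stanza per subsystem, and IFS=',' plus jq join them into a single config. For orientation, one stanza after parameter expansion looks like this (values taken from the printf output further down; trsvcid 4420 is the listener port opened above):
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}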
00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.223 { 00:22:24.223 "params": { 00:22:24.223 "name": "Nvme$subsystem", 00:22:24.223 "trtype": "$TEST_TRANSPORT", 00:22:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.223 "adrfam": "ipv4", 00:22:24.223 "trsvcid": "$NVMF_PORT", 00:22:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.223 "hdgst": ${hdgst:-false}, 00:22:24.223 "ddgst": ${ddgst:-false} 00:22:24.223 }, 00:22:24.223 "method": "bdev_nvme_attach_controller" 00:22:24.223 } 00:22:24.223 EOF 00:22:24.223 )") 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.223 { 00:22:24.223 "params": { 00:22:24.223 "name": "Nvme$subsystem", 00:22:24.223 "trtype": "$TEST_TRANSPORT", 00:22:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.223 "adrfam": "ipv4", 00:22:24.223 "trsvcid": "$NVMF_PORT", 00:22:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.223 "hdgst": ${hdgst:-false}, 00:22:24.223 "ddgst": ${ddgst:-false} 00:22:24.223 }, 00:22:24.223 "method": "bdev_nvme_attach_controller" 00:22:24.223 } 00:22:24.223 EOF 00:22:24.223 )") 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.223 { 00:22:24.223 "params": { 00:22:24.223 "name": "Nvme$subsystem", 00:22:24.223 "trtype": "$TEST_TRANSPORT", 00:22:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.223 "adrfam": "ipv4", 00:22:24.223 "trsvcid": "$NVMF_PORT", 00:22:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.223 "hdgst": ${hdgst:-false}, 00:22:24.223 "ddgst": ${ddgst:-false} 00:22:24.223 }, 00:22:24.223 "method": "bdev_nvme_attach_controller" 00:22:24.223 } 00:22:24.223 EOF 00:22:24.223 )") 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:22:24.223 { 00:22:24.223 "params": { 00:22:24.223 "name": "Nvme$subsystem", 00:22:24.223 "trtype": "$TEST_TRANSPORT", 00:22:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.223 "adrfam": "ipv4", 00:22:24.223 "trsvcid": "$NVMF_PORT", 00:22:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.223 "hdgst": ${hdgst:-false}, 00:22:24.223 "ddgst": ${ddgst:-false} 00:22:24.223 }, 00:22:24.223 "method": "bdev_nvme_attach_controller" 00:22:24.223 } 00:22:24.223 EOF 00:22:24.223 )") 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.223 { 00:22:24.223 "params": { 00:22:24.223 "name": "Nvme$subsystem", 00:22:24.223 "trtype": "$TEST_TRANSPORT", 00:22:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.223 "adrfam": "ipv4", 00:22:24.223 "trsvcid": "$NVMF_PORT", 00:22:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.223 "hdgst": ${hdgst:-false}, 00:22:24.223 "ddgst": ${ddgst:-false} 00:22:24.223 }, 00:22:24.223 "method": "bdev_nvme_attach_controller" 00:22:24.223 } 00:22:24.223 EOF 00:22:24.223 )") 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.223 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.223 { 00:22:24.223 "params": { 00:22:24.224 "name": "Nvme$subsystem", 00:22:24.224 "trtype": "$TEST_TRANSPORT", 00:22:24.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "$NVMF_PORT", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.224 "hdgst": ${hdgst:-false}, 00:22:24.224 "ddgst": ${ddgst:-false} 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 } 00:22:24.224 EOF 00:22:24.224 )") 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.224 { 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme$subsystem", 00:22:24.224 "trtype": "$TEST_TRANSPORT", 00:22:24.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "$NVMF_PORT", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.224 "hdgst": ${hdgst:-false}, 00:22:24.224 "ddgst": ${ddgst:-false} 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 } 00:22:24.224 EOF 00:22:24.224 )") 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.224 [2024-11-26 07:31:52.144607] Starting SPDK 
v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:22:24.224 [2024-11-26 07:31:52.144655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.224 { 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme$subsystem", 00:22:24.224 "trtype": "$TEST_TRANSPORT", 00:22:24.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "$NVMF_PORT", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.224 "hdgst": ${hdgst:-false}, 00:22:24.224 "ddgst": ${ddgst:-false} 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 } 00:22:24.224 EOF 00:22:24.224 )") 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.224 { 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme$subsystem", 00:22:24.224 "trtype": "$TEST_TRANSPORT", 00:22:24.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "$NVMF_PORT", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.224 "hdgst": ${hdgst:-false}, 00:22:24.224 "ddgst": ${ddgst:-false} 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 } 00:22:24.224 EOF 00:22:24.224 )") 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.224 { 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme$subsystem", 00:22:24.224 "trtype": "$TEST_TRANSPORT", 00:22:24.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "$NVMF_PORT", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.224 "hdgst": ${hdgst:-false}, 00:22:24.224 "ddgst": ${ddgst:-false} 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 } 00:22:24.224 EOF 00:22:24.224 )") 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:24.224 07:31:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme1", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme2", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme3", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme4", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme5", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme6", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme7", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme8", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme9", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 },{ 00:22:24.224 "params": { 00:22:24.224 "name": "Nvme10", 00:22:24.224 "trtype": "tcp", 00:22:24.224 "traddr": "10.0.0.2", 00:22:24.224 "adrfam": "ipv4", 00:22:24.224 "trsvcid": "4420", 00:22:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:24.224 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:24.224 "hdgst": false, 00:22:24.224 "ddgst": false 00:22:24.224 }, 00:22:24.224 "method": "bdev_nvme_attach_controller" 00:22:24.224 }' 00:22:24.224 [2024-11-26 07:31:52.208331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.224 [2024-11-26 07:31:52.249675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 793453 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:26.127 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:27.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 793453 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 793250 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.064 { 00:22:27.064 "params": { 00:22:27.064 "name": "Nvme$subsystem", 00:22:27.064 "trtype": "$TEST_TRANSPORT", 00:22:27.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.064 "adrfam": "ipv4", 00:22:27.064 "trsvcid": "$NVMF_PORT", 00:22:27.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.064 "hdgst": ${hdgst:-false}, 00:22:27.064 "ddgst": ${ddgst:-false} 00:22:27.064 }, 00:22:27.064 "method": "bdev_nvme_attach_controller" 00:22:27.064 } 00:22:27.064 EOF 00:22:27.064 )") 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.064 { 00:22:27.064 "params": { 00:22:27.064 "name": "Nvme$subsystem", 00:22:27.064 "trtype": "$TEST_TRANSPORT", 00:22:27.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.064 "adrfam": "ipv4", 00:22:27.064 "trsvcid": "$NVMF_PORT", 00:22:27.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.064 "hdgst": ${hdgst:-false}, 00:22:27.064 "ddgst": ${ddgst:-false} 00:22:27.064 }, 00:22:27.064 "method": "bdev_nvme_attach_controller" 00:22:27.064 } 00:22:27.064 EOF 00:22:27.064 )") 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.064 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.064 { 00:22:27.064 "params": { 00:22:27.064 "name": "Nvme$subsystem", 00:22:27.064 "trtype": "$TEST_TRANSPORT", 00:22:27.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.064 "adrfam": "ipv4", 00:22:27.064 "trsvcid": "$NVMF_PORT", 00:22:27.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.064 "hdgst": ${hdgst:-false}, 00:22:27.064 "ddgst": ${ddgst:-false} 00:22:27.064 }, 00:22:27.064 "method": "bdev_nvme_attach_controller" 00:22:27.064 } 00:22:27.064 EOF 00:22:27.064 )") 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.065 { 00:22:27.065 "params": { 00:22:27.065 "name": "Nvme$subsystem", 00:22:27.065 "trtype": "$TEST_TRANSPORT", 00:22:27.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.065 "adrfam": "ipv4", 00:22:27.065 "trsvcid": "$NVMF_PORT", 00:22:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.065 "hdgst": ${hdgst:-false}, 00:22:27.065 "ddgst": ${ddgst:-false} 00:22:27.065 }, 00:22:27.065 "method": "bdev_nvme_attach_controller" 00:22:27.065 } 00:22:27.065 EOF 00:22:27.065 )") 00:22:27.065 07:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.065 { 00:22:27.065 "params": { 00:22:27.065 "name": "Nvme$subsystem", 00:22:27.065 "trtype": "$TEST_TRANSPORT", 00:22:27.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.065 "adrfam": "ipv4", 00:22:27.065 "trsvcid": "$NVMF_PORT", 00:22:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.065 "hdgst": ${hdgst:-false}, 00:22:27.065 "ddgst": ${ddgst:-false} 00:22:27.065 }, 00:22:27.065 "method": "bdev_nvme_attach_controller" 00:22:27.065 } 00:22:27.065 EOF 00:22:27.065 )") 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.065 { 00:22:27.065 "params": { 00:22:27.065 "name": "Nvme$subsystem", 00:22:27.065 "trtype": "$TEST_TRANSPORT", 00:22:27.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.065 "adrfam": "ipv4", 00:22:27.065 "trsvcid": "$NVMF_PORT", 00:22:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.065 "hdgst": ${hdgst:-false}, 00:22:27.065 "ddgst": ${ddgst:-false} 00:22:27.065 }, 00:22:27.065 "method": "bdev_nvme_attach_controller" 00:22:27.065 } 00:22:27.065 EOF 00:22:27.065 )") 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.065 { 00:22:27.065 "params": { 00:22:27.065 "name": "Nvme$subsystem", 00:22:27.065 "trtype": "$TEST_TRANSPORT", 00:22:27.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.065 "adrfam": "ipv4", 00:22:27.065 "trsvcid": "$NVMF_PORT", 00:22:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.065 "hdgst": ${hdgst:-false}, 00:22:27.065 "ddgst": ${ddgst:-false} 00:22:27.065 }, 00:22:27.065 "method": "bdev_nvme_attach_controller" 00:22:27.065 } 00:22:27.065 EOF 00:22:27.065 )") 00:22:27.065 [2024-11-26 07:31:55.074312] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:22:27.065 [2024-11-26 07:31:55.074363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793943 ] 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.065 { 00:22:27.065 "params": { 00:22:27.065 "name": "Nvme$subsystem", 00:22:27.065 "trtype": "$TEST_TRANSPORT", 00:22:27.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.065 "adrfam": "ipv4", 00:22:27.065 "trsvcid": "$NVMF_PORT", 00:22:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.065 "hdgst": ${hdgst:-false}, 00:22:27.065 "ddgst": ${ddgst:-false} 00:22:27.065 }, 00:22:27.065 "method": "bdev_nvme_attach_controller" 00:22:27.065 } 00:22:27.065 EOF 00:22:27.065 )") 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.065 { 00:22:27.065 "params": { 00:22:27.065 "name": "Nvme$subsystem", 00:22:27.065 "trtype": "$TEST_TRANSPORT", 00:22:27.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.065 "adrfam": "ipv4", 00:22:27.065 "trsvcid": "$NVMF_PORT", 00:22:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.065 "hdgst": ${hdgst:-false}, 00:22:27.065 "ddgst": ${ddgst:-false} 00:22:27.065 }, 00:22:27.065 "method": "bdev_nvme_attach_controller" 00:22:27.065 } 00:22:27.065 EOF 00:22:27.065 )") 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.065 { 00:22:27.065 "params": { 00:22:27.065 "name": "Nvme$subsystem", 00:22:27.065 "trtype": "$TEST_TRANSPORT", 00:22:27.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.065 "adrfam": "ipv4", 00:22:27.065 "trsvcid": "$NVMF_PORT", 00:22:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.065 "hdgst": ${hdgst:-false}, 00:22:27.065 "ddgst": ${ddgst:-false} 00:22:27.065 }, 00:22:27.065 "method": "bdev_nvme_attach_controller" 00:22:27.065 } 00:22:27.065 EOF 00:22:27.065 )") 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:27.065 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:27.065 "params": { 00:22:27.065 "name": "Nvme1", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme2", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme3", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme4", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme5", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme6", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme7", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme8", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme9", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 },{ 00:22:27.066 "params": { 00:22:27.066 "name": "Nvme10", 00:22:27.066 "trtype": "tcp", 00:22:27.066 "traddr": "10.0.0.2", 00:22:27.066 "adrfam": "ipv4", 00:22:27.066 "trsvcid": "4420", 00:22:27.066 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:27.066 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:27.066 "hdgst": false, 00:22:27.066 "ddgst": false 00:22:27.066 }, 00:22:27.066 "method": "bdev_nvme_attach_controller" 00:22:27.066 }' 00:22:27.066 [2024-11-26 07:31:55.138789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.326 [2024-11-26 07:31:55.180958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.704 Running I/O for 1 seconds... 00:22:29.642 2195.00 IOPS, 137.19 MiB/s 00:22:29.642 Latency(us) 00:22:29.642 [2024-11-26T06:31:57.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.642 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.642 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme1n1 : 1.15 281.57 17.60 0.00 0.00 224397.72 15614.66 205156.17 00:22:29.643 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme2n1 : 1.17 273.86 17.12 0.00 0.00 228420.74 18008.15 223392.28 00:22:29.643 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme3n1 : 1.15 281.71 17.61 0.00 0.00 218071.39 10314.80 220656.86 00:22:29.643 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme4n1 : 1.16 280.52 17.53 0.00 0.00 216286.54 3234.06 224304.08 00:22:29.643 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme5n1 : 1.14 224.32 14.02 0.00 0.00 266827.46 18692.01 228863.11 00:22:29.643 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme6n1 : 1.18 271.58 16.97 0.00 0.00 217642.38 19717.79 233422.14 00:22:29.643 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme7n1 : 1.17 272.98 17.06 0.00 0.00 213184.33 15158.76 217921.45 00:22:29.643 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme8n1 : 1.18 272.19 17.01 0.00 0.00 210800.64 17324.30 230686.72 00:22:29.643 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme9n1 : 1.18 270.88 16.93 0.00 0.00 208660.57 17666.23 229774.91 00:22:29.643 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:29.643 Verification LBA range: start 0x0 length 0x400 00:22:29.643 Nvme10n1 : 1.18 273.67 17.10 0.00 0.00 203516.87 1260.86 238892.97 00:22:29.643 [2024-11-26T06:31:57.743Z] =================================================================================================================== 00:22:29.643 [2024-11-26T06:31:57.743Z] Total : 2703.30 168.96 0.00 0.00 219818.66 1260.86 238892.97 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.903 rmmod nvme_tcp 00:22:29.903 rmmod nvme_fabrics 00:22:29.903 rmmod nvme_keyring 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 793250 ']' 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 793250 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 793250 ']' 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 793250 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 793250 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:29.903 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:29.904 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 793250' 00:22:29.904 killing process with pid 793250 00:22:29.904 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 793250 00:22:29.904 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 793250 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.473 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.379 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.379 00:22:32.379 real 0m14.497s 00:22:32.379 user 0m33.165s 00:22:32.379 sys 0m5.478s 00:22:32.379 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.379 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:32.379 ************************************ 00:22:32.379 END TEST nvmf_shutdown_tc1 00:22:32.379 ************************************ 00:22:32.379 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:32.379 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:32.379 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.379 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:32.638 ************************************ 00:22:32.638 START TEST nvmf_shutdown_tc2 00:22:32.638 ************************************ 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:32.638 07:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.638 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:32.639 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:32.639 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:32.639 Found net devices under 0000:86:00.0: cvl_0_0 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.639 07:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:32.639 Found net devices under 0000:86:00.1: cvl_0_1 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.639 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.640 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.640 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.640 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.640 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.640 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.640 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:22:32.900 00:22:32.900 --- 10.0.0.2 ping statistics --- 00:22:32.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.900 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:22:32.900 00:22:32.900 --- 10.0.0.1 ping statistics --- 00:22:32.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.900 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.900 07:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=794974 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 794974 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 794974 ']' 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.900 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:32.900 [2024-11-26 07:32:00.835327] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:22:32.900 [2024-11-26 07:32:00.835371] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.900 [2024-11-26 07:32:00.902096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.900 [2024-11-26 07:32:00.944260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.900 [2024-11-26 07:32:00.944299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.900 [2024-11-26 07:32:00.944306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.900 [2024-11-26 07:32:00.944311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.900 [2024-11-26 07:32:00.944317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
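The target configuration that follows in the log (one TCP transport, ten Malloc-backed subsystems, a listener on 10.0.0.2:4420) is driven by shutdown.sh through batched rpc_cmd calls collected in rpcs.txt. For anyone reproducing the setup by hand outside the harness, a minimal equivalent in plain rpc.py calls might look like the sketch below; the rpc.py path and the Malloc bdev sizes are assumptions, while the transport options, subsystem NQNs and listener address are taken from the log.

    # Hand-typed approximation of the batched target setup; not the script's
    # literal rpcs.txt contents.
    rpc=./scripts/rpc.py                                  # assumed: run from the spdk repo root
    $rpc nvmf_create_transport -t tcp -o -u 8192          # same options as the logged rpc_cmd
    for i in $(seq 1 10); do
        $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB / 512 B blocks: illustrative sizes
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done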
00:22:32.900 [2024-11-26 07:32:00.945772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.900 [2024-11-26 07:32:00.945856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.900 [2024-11-26 07:32:00.946016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.900 [2024-11-26 07:32:00.946016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.192 [2024-11-26 07:32:01.087420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.192 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.192 Malloc1 00:22:33.192 [2024-11-26 07:32:01.197384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.192 Malloc2 00:22:33.192 Malloc3 00:22:33.498 Malloc4 00:22:33.498 Malloc5 00:22:33.498 Malloc6 00:22:33.498 Malloc7 00:22:33.498 Malloc8 00:22:33.498 Malloc9 00:22:33.498 Malloc10 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=795247 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 795247 /var/tmp/bdevperf.sock 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 795247 ']' 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.827 07:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.827 { 00:22:33.827 "params": { 00:22:33.827 "name": "Nvme$subsystem", 00:22:33.827 "trtype": "$TEST_TRANSPORT", 00:22:33.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.827 "adrfam": "ipv4", 00:22:33.827 "trsvcid": "$NVMF_PORT", 00:22:33.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.827 "hdgst": ${hdgst:-false}, 00:22:33.827 "ddgst": ${ddgst:-false} 00:22:33.827 }, 00:22:33.827 "method": "bdev_nvme_attach_controller" 00:22:33.827 } 00:22:33.827 EOF 00:22:33.827 )") 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.827 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.827 { 00:22:33.827 "params": { 00:22:33.827 "name": "Nvme$subsystem", 00:22:33.827 "trtype": "$TEST_TRANSPORT", 00:22:33.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.827 "adrfam": "ipv4", 00:22:33.827 "trsvcid": "$NVMF_PORT", 00:22:33.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.828 { 00:22:33.828 "params": { 00:22:33.828 
"name": "Nvme$subsystem", 00:22:33.828 "trtype": "$TEST_TRANSPORT", 00:22:33.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "$NVMF_PORT", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.828 { 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme$subsystem", 00:22:33.828 "trtype": "$TEST_TRANSPORT", 00:22:33.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "$NVMF_PORT", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.828 { 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme$subsystem", 00:22:33.828 "trtype": "$TEST_TRANSPORT", 00:22:33.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "$NVMF_PORT", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.828 { 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme$subsystem", 00:22:33.828 "trtype": "$TEST_TRANSPORT", 00:22:33.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "$NVMF_PORT", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.828 { 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme$subsystem", 00:22:33.828 "trtype": "$TEST_TRANSPORT", 00:22:33.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "$NVMF_PORT", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 [2024-11-26 07:32:01.670974] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:22:33.828 [2024-11-26 07:32:01.671022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795247 ] 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.828 { 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme$subsystem", 00:22:33.828 "trtype": "$TEST_TRANSPORT", 00:22:33.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "$NVMF_PORT", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.828 { 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme$subsystem", 00:22:33.828 "trtype": "$TEST_TRANSPORT", 00:22:33.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "$NVMF_PORT", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.828 { 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme$subsystem", 00:22:33.828 "trtype": "$TEST_TRANSPORT", 00:22:33.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.828 "adrfam": 
"ipv4", 00:22:33.828 "trsvcid": "$NVMF_PORT", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.828 "hdgst": ${hdgst:-false}, 00:22:33.828 "ddgst": ${ddgst:-false} 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 } 00:22:33.828 EOF 00:22:33.828 )") 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:33.828 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme1", 00:22:33.828 "trtype": "tcp", 00:22:33.828 "traddr": "10.0.0.2", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "4420", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.828 "hdgst": false, 00:22:33.828 "ddgst": false 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 },{ 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme2", 00:22:33.828 "trtype": "tcp", 00:22:33.828 "traddr": "10.0.0.2", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "4420", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:33.828 "hdgst": false, 00:22:33.828 "ddgst": false 00:22:33.828 }, 00:22:33.828 "method": "bdev_nvme_attach_controller" 00:22:33.828 },{ 00:22:33.828 "params": { 00:22:33.828 "name": "Nvme3", 00:22:33.828 "trtype": "tcp", 00:22:33.828 "traddr": "10.0.0.2", 00:22:33.828 "adrfam": "ipv4", 00:22:33.828 "trsvcid": "4420", 00:22:33.828 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:33.828 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:33.828 "hdgst": false, 00:22:33.828 "ddgst": false 00:22:33.829 }, 00:22:33.829 "method": "bdev_nvme_attach_controller" 00:22:33.829 },{ 00:22:33.829 "params": { 00:22:33.829 "name": "Nvme4", 00:22:33.829 "trtype": "tcp", 00:22:33.829 "traddr": "10.0.0.2", 00:22:33.829 "adrfam": "ipv4", 00:22:33.829 "trsvcid": "4420", 00:22:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:33.829 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:33.829 "hdgst": false, 00:22:33.829 "ddgst": false 00:22:33.829 }, 00:22:33.829 "method": "bdev_nvme_attach_controller" 00:22:33.829 },{ 00:22:33.829 "params": { 00:22:33.829 "name": "Nvme5", 00:22:33.829 "trtype": "tcp", 00:22:33.829 "traddr": "10.0.0.2", 00:22:33.829 "adrfam": "ipv4", 00:22:33.829 "trsvcid": "4420", 00:22:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:33.829 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:33.829 "hdgst": false, 00:22:33.829 "ddgst": false 00:22:33.829 }, 00:22:33.829 "method": "bdev_nvme_attach_controller" 00:22:33.829 },{ 00:22:33.829 "params": { 00:22:33.829 "name": "Nvme6", 00:22:33.829 "trtype": "tcp", 00:22:33.829 "traddr": "10.0.0.2", 00:22:33.829 "adrfam": "ipv4", 00:22:33.829 "trsvcid": "4420", 00:22:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:33.829 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:33.829 "hdgst": false, 00:22:33.829 "ddgst": false 00:22:33.829 }, 00:22:33.829 "method": "bdev_nvme_attach_controller" 00:22:33.829 },{ 00:22:33.829 "params": { 00:22:33.829 "name": "Nvme7", 00:22:33.829 "trtype": "tcp", 00:22:33.829 "traddr": "10.0.0.2", 00:22:33.829 
"adrfam": "ipv4", 00:22:33.829 "trsvcid": "4420", 00:22:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:33.829 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:33.829 "hdgst": false, 00:22:33.829 "ddgst": false 00:22:33.829 }, 00:22:33.829 "method": "bdev_nvme_attach_controller" 00:22:33.829 },{ 00:22:33.829 "params": { 00:22:33.829 "name": "Nvme8", 00:22:33.829 "trtype": "tcp", 00:22:33.829 "traddr": "10.0.0.2", 00:22:33.829 "adrfam": "ipv4", 00:22:33.829 "trsvcid": "4420", 00:22:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:33.829 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:33.829 "hdgst": false, 00:22:33.829 "ddgst": false 00:22:33.829 }, 00:22:33.829 "method": "bdev_nvme_attach_controller" 00:22:33.829 },{ 00:22:33.829 "params": { 00:22:33.829 "name": "Nvme9", 00:22:33.829 "trtype": "tcp", 00:22:33.829 "traddr": "10.0.0.2", 00:22:33.829 "adrfam": "ipv4", 00:22:33.829 "trsvcid": "4420", 00:22:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:33.829 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:33.829 "hdgst": false, 00:22:33.829 "ddgst": false 00:22:33.829 }, 00:22:33.829 "method": "bdev_nvme_attach_controller" 00:22:33.829 },{ 00:22:33.829 "params": { 00:22:33.829 "name": "Nvme10", 00:22:33.829 "trtype": "tcp", 00:22:33.829 "traddr": "10.0.0.2", 00:22:33.829 "adrfam": "ipv4", 00:22:33.829 "trsvcid": "4420", 00:22:33.829 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:33.829 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:33.829 "hdgst": false, 00:22:33.829 "ddgst": false 00:22:33.829 }, 00:22:33.829 "method": "bdev_nvme_attach_controller" 00:22:33.829 }' 00:22:33.829 [2024-11-26 07:32:01.734639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.829 [2024-11-26 07:32:01.776095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.356 Running I/O for 10 seconds... 
00:22:35.663 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.663 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:35.663 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:35.663 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.663 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.663 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:35.664 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.939 07:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 795247 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 795247 ']' 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 795247 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795247 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795247' 00:22:35.939 killing process with pid 795247 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 795247 00:22:35.939 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 795247 00:22:36.198 Received shutdown signal, test time was about 0.844819 seconds 00:22:36.198 00:22:36.198 Latency(us) 00:22:36.198 [2024-11-26T06:32:04.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.198 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme1n1 : 0.84 304.15 19.01 0.00 0.00 207862.87 17096.35 223392.28 00:22:36.198 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme2n1 : 0.84 314.81 19.68 0.00 0.00 195771.75 4957.94 216097.84 00:22:36.198 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme3n1 : 0.83 307.47 19.22 0.00 0.00 197635.12 24390.79 207891.59 00:22:36.198 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme4n1 : 0.84 310.64 19.42 0.00 0.00 191168.76 1780.87 208803.39 
00:22:36.198 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme5n1 : 0.83 231.87 14.49 0.00 0.00 251516.66 19603.81 235245.75 00:22:36.198 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme6n1 : 0.82 234.92 14.68 0.00 0.00 242691.26 16412.49 220656.86 00:22:36.198 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme7n1 : 0.84 308.00 19.25 0.00 0.00 181612.45 1488.81 219745.06 00:22:36.198 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme8n1 : 0.81 237.79 14.86 0.00 0.00 228839.96 15500.69 224304.08 00:22:36.198 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme9n1 : 0.82 234.36 14.65 0.00 0.00 227460.67 19717.79 223392.28 00:22:36.198 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:36.198 Verification LBA range: start 0x0 length 0x400 00:22:36.198 Nvme10n1 : 0.83 232.66 14.54 0.00 0.00 223996.88 19261.89 238892.97 00:22:36.198 [2024-11-26T06:32:04.298Z] =================================================================================================================== 00:22:36.198 [2024-11-26T06:32:04.298Z] Total : 2716.67 169.79 0.00 0.00 211843.70 1488.81 238892.97 00:22:36.198 07:32:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:37.134 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 794974 00:22:37.134 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:37.134 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:37.134 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:37.134 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.392 rmmod nvme_tcp 00:22:37.392 rmmod nvme_fabrics 00:22:37.392 rmmod nvme_keyring 00:22:37.392 07:32:05 
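The polling visible in the trace above is the waitforio helper (target/shutdown.sh@51-70): it queries bdev_get_iostat over the bdevperf RPC socket and only lets the test move on to killing the application once Nvme1n1 has accumulated at least 100 reads (67 on the first pass here, 131 after the 0.25 s sleep). A minimal sketch of that pattern, reconstructed from the traced commands rather than copied from the script, so details may differ:

# Poll until the named bdev on the given RPC socket has seen >= 100 read ops.
# Reconstructed from the traced shutdown.sh@51-70; rpc_cmd is the harness wrapper
# around the SPDK RPC client seen throughout this log.
waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i reads
    for ((i = 10; i != 0; i--)); do
        reads=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

With the -q 64 -w verify bdevperf job driving constant reads, the threshold is normally crossed within a poll or two, exactly as in the two iterations traced here.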
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 794974 ']' 00:22:37.392 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 794974 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 794974 ']' 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 794974 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 794974 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 794974' 00:22:37.393 killing process with pid 794974 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 794974 00:22:37.393 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 794974 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.652 07:32:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.652 07:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.188 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:40.188 00:22:40.188 real 0m7.298s 00:22:40.188 user 0m21.483s 00:22:40.188 sys 0m1.290s 00:22:40.188 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.188 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:40.188 ************************************ 00:22:40.188 END TEST nvmf_shutdown_tc2 00:22:40.188 ************************************ 00:22:40.188 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:40.188 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.189 ************************************ 00:22:40.189 START TEST nvmf_shutdown_tc3 00:22:40.189 ************************************ 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.189 07:32:07 
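Between the bdevperf summary and the END TEST banner, stoptarget and nvmftestfini tear the tc2 fixture back down: the verify state file and generated configs are removed, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded under set +e, the nvmf_tgt process (pid 794974, reactor_1) is killed and reaped, the SPDK_NVMF-tagged iptables rule is dropped via an iptables-save | grep -v | iptables-restore round trip, the target namespace is removed, and the initiator-side address is flushed. A condensed sketch of that order of operations, paraphrased from the traced nvmf/common.sh calls; SPDK_ROOT below stands in for the workspace path in the trace, and the namespace deletion is an assumption, since _remove_spdk_ns runs with its output discarded:

rm -f ./local-job0-0-verify.state
rm -rf "$SPDK_ROOT/test/nvmf/target/bdevperf.conf" "$SPDK_ROOT/test/nvmf/target/rpcs.txt"
sync
modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                # killprocess 794974
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk                   # assumed effect of remove_spdk_ns
ip -4 addr flush cvl_0_1

The tc3 nvmftestinit traced above calls the same remove_spdk_ns helper again before the PCI scan that follows, so the next test always starts from a clean namespace.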
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.189 07:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.189 07:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.189 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.189 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.189 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.190 07:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.190 07:32:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:22:40.190 00:22:40.190 --- 10.0.0.2 ping statistics --- 00:22:40.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.190 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:22:40.190 00:22:40.190 --- 10.0.0.1 ping statistics --- 00:22:40.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.190 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=796364 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 796364 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 796364 ']' 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
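The nvmf_tcp_init block traced above carves the usual two-endpoint topology out of the two e810 ports: cvl_0_0 becomes the target interface inside a fresh cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator interface with 10.0.0.1/24, an ACCEPT rule is inserted for TCP port 4420 on cvl_0_1, and connectivity is checked with one ping in each direction (0.435 ms and 0.220 ms here). The commands, as they appear in the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator side -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator address

Every subsequent nvmf_tgt invocation is prefixed with ip netns exec cvl_0_0_ns_spdk, so the listener on 10.0.0.2:4420 lives entirely inside that namespace while bdevperf connects from the root namespace.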
00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.190 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.190 [2024-11-26 07:32:08.154801] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:22:40.190 [2024-11-26 07:32:08.154843] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.190 [2024-11-26 07:32:08.216251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.190 [2024-11-26 07:32:08.261740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.190 [2024-11-26 07:32:08.261774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.190 [2024-11-26 07:32:08.261780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.190 [2024-11-26 07:32:08.261787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.190 [2024-11-26 07:32:08.261793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.190 [2024-11-26 07:32:08.263442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.190 [2024-11-26 07:32:08.263461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.190 [2024-11-26 07:32:08.263597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.190 [2024-11-26 07:32:08.263597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.450 [2024-11-26 07:32:08.407487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:40.450 07:32:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.450 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.450 Malloc1 
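With the target started under core mask 0x1E (reactors on cores 1 through 4) and the TCP transport created via rpc_cmd nvmf_create_transport -t tcp -o -u 8192, the loop traced above (shutdown.sh@28-29) cats one RPC batch per subsystem into rpcs.txt; Malloc1 here and Malloc2 through Malloc10 below are the malloc bdevs created when that file is replayed against the target. The batch contents are not echoed in the trace, so the stanza below is only a plausible per-subsystem reconstruction using standard SPDK RPC names plus the NQN pattern and the 10.0.0.2:4420 listener that do appear later in this log; the bdev size, block size, and serial number are guesses:

# Hypothetical rpcs.txt stanza for subsystem $i (i = 1..10); the real script may differ.
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The tcp.c:1081 notice just below, NVMe/TCP Target Listening on 10.0.0.2 port 4420, is the target acknowledging the first such listener.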
00:22:40.450 [2024-11-26 07:32:08.514486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.450 Malloc2 00:22:40.709 Malloc3 00:22:40.709 Malloc4 00:22:40.709 Malloc5 00:22:40.709 Malloc6 00:22:40.709 Malloc7 00:22:40.709 Malloc8 00:22:40.969 Malloc9 00:22:40.969 Malloc10 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=796602 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 796602 /var/tmp/bdevperf.sock 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 796602 ']' 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
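The workload side is a single bdevperf instance launched on its own RPC socket and handed the target description through an anonymous file descriptor: build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10, i.e. queue depth 64, 64 KiB I/Os, a verify workload and a 10 second run, which matches the per-job header printed in the tc2 summary earlier. Stripped of the process-substitution plumbing, the equivalent standalone invocation against a saved copy of the generated config (bdevperf.json is a hypothetical filename) would be:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json bdevperf.json \
    -q 64 -o 65536 -w verify -t 10

waitforlisten then blocks on /var/tmp/bdevperf.sock, which is why the "Waiting for process to start up..." line above names the bdevperf socket rather than the target's spdk.sock.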
00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.969 { 00:22:40.969 "params": { 00:22:40.969 "name": "Nvme$subsystem", 00:22:40.969 "trtype": "$TEST_TRANSPORT", 00:22:40.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.969 "adrfam": "ipv4", 00:22:40.969 "trsvcid": "$NVMF_PORT", 00:22:40.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.969 "hdgst": ${hdgst:-false}, 00:22:40.969 "ddgst": ${ddgst:-false} 00:22:40.969 }, 00:22:40.969 "method": "bdev_nvme_attach_controller" 00:22:40.969 } 00:22:40.969 EOF 00:22:40.969 )") 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.969 { 00:22:40.969 "params": { 00:22:40.969 "name": "Nvme$subsystem", 00:22:40.969 "trtype": "$TEST_TRANSPORT", 00:22:40.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.969 "adrfam": "ipv4", 00:22:40.969 "trsvcid": "$NVMF_PORT", 00:22:40.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.969 "hdgst": ${hdgst:-false}, 00:22:40.969 "ddgst": ${ddgst:-false} 00:22:40.969 }, 00:22:40.969 "method": "bdev_nvme_attach_controller" 00:22:40.969 } 00:22:40.969 EOF 00:22:40.969 )") 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.969 { 00:22:40.969 "params": { 00:22:40.969 "name": "Nvme$subsystem", 00:22:40.969 "trtype": "$TEST_TRANSPORT", 00:22:40.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.969 "adrfam": "ipv4", 00:22:40.969 "trsvcid": "$NVMF_PORT", 00:22:40.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.969 "hdgst": ${hdgst:-false}, 00:22:40.969 "ddgst": ${ddgst:-false} 00:22:40.969 }, 00:22:40.969 "method": "bdev_nvme_attach_controller" 00:22:40.969 } 00:22:40.969 EOF 00:22:40.969 )") 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:22:40.969 { 00:22:40.969 "params": { 00:22:40.969 "name": "Nvme$subsystem", 00:22:40.969 "trtype": "$TEST_TRANSPORT", 00:22:40.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.969 "adrfam": "ipv4", 00:22:40.969 "trsvcid": "$NVMF_PORT", 00:22:40.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.969 "hdgst": ${hdgst:-false}, 00:22:40.969 "ddgst": ${ddgst:-false} 00:22:40.969 }, 00:22:40.969 "method": "bdev_nvme_attach_controller" 00:22:40.969 } 00:22:40.969 EOF 00:22:40.969 )") 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.969 { 00:22:40.969 "params": { 00:22:40.969 "name": "Nvme$subsystem", 00:22:40.969 "trtype": "$TEST_TRANSPORT", 00:22:40.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.969 "adrfam": "ipv4", 00:22:40.969 "trsvcid": "$NVMF_PORT", 00:22:40.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.969 "hdgst": ${hdgst:-false}, 00:22:40.969 "ddgst": ${ddgst:-false} 00:22:40.969 }, 00:22:40.969 "method": "bdev_nvme_attach_controller" 00:22:40.969 } 00:22:40.969 EOF 00:22:40.969 )") 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.969 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.969 { 00:22:40.969 "params": { 00:22:40.969 "name": "Nvme$subsystem", 00:22:40.969 "trtype": "$TEST_TRANSPORT", 00:22:40.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.969 "adrfam": "ipv4", 00:22:40.969 "trsvcid": "$NVMF_PORT", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.970 "hdgst": ${hdgst:-false}, 00:22:40.970 "ddgst": ${ddgst:-false} 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 } 00:22:40.970 EOF 00:22:40.970 )") 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.970 { 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme$subsystem", 00:22:40.970 "trtype": "$TEST_TRANSPORT", 00:22:40.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "$NVMF_PORT", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.970 "hdgst": ${hdgst:-false}, 00:22:40.970 "ddgst": ${ddgst:-false} 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 } 00:22:40.970 EOF 00:22:40.970 )") 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.970 [2024-11-26 07:32:08.989185] Starting SPDK 
v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:22:40.970 [2024-11-26 07:32:08.989231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796602 ] 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.970 { 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme$subsystem", 00:22:40.970 "trtype": "$TEST_TRANSPORT", 00:22:40.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "$NVMF_PORT", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.970 "hdgst": ${hdgst:-false}, 00:22:40.970 "ddgst": ${ddgst:-false} 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 } 00:22:40.970 EOF 00:22:40.970 )") 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.970 07:32:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.970 { 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme$subsystem", 00:22:40.970 "trtype": "$TEST_TRANSPORT", 00:22:40.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "$NVMF_PORT", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.970 "hdgst": ${hdgst:-false}, 00:22:40.970 "ddgst": ${ddgst:-false} 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 } 00:22:40.970 EOF 00:22:40.970 )") 00:22:40.970 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.970 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.970 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.970 { 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme$subsystem", 00:22:40.970 "trtype": "$TEST_TRANSPORT", 00:22:40.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "$NVMF_PORT", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.970 "hdgst": ${hdgst:-false}, 00:22:40.970 "ddgst": ${ddgst:-false} 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 } 00:22:40.970 EOF 00:22:40.970 )") 00:22:40.970 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:40.970 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
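Each heredoc fragment above contributes one bdev_nvme_attach_controller parameter block to the config array built by gen_nvmf_target_json; the entries are comma-joined (IFS=, plus printf '%s\n' "${config[*]}"), spliced into the bdevperf JSON config, and validated with jq before being handed to bdevperf as /dev/fd/63. The fully expanded join is echoed immediately below, one controller per subsystem, all pointing at 10.0.0.2:4420. A lightly condensed sketch of that assembly, with the template variables already expanded; the surrounding JSON wrapper that the real helper in nvmf/common.sh adds is not echoed in this trace and is omitted here:

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
( IFS=,; printf '%s\n' "${config[*]}" )   # the comma-joined fragment echoed in the log below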
00:22:40.970 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:40.970 07:32:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme1", 00:22:40.970 "trtype": "tcp", 00:22:40.970 "traddr": "10.0.0.2", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "4420", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.970 "hdgst": false, 00:22:40.970 "ddgst": false 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 },{ 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme2", 00:22:40.970 "trtype": "tcp", 00:22:40.970 "traddr": "10.0.0.2", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "4420", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:40.970 "hdgst": false, 00:22:40.970 "ddgst": false 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 },{ 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme3", 00:22:40.970 "trtype": "tcp", 00:22:40.970 "traddr": "10.0.0.2", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "4420", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:40.970 "hdgst": false, 00:22:40.970 "ddgst": false 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 },{ 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme4", 00:22:40.970 "trtype": "tcp", 00:22:40.970 "traddr": "10.0.0.2", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "4420", 00:22:40.970 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:40.970 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:40.970 "hdgst": false, 00:22:40.970 "ddgst": false 00:22:40.970 }, 00:22:40.970 "method": "bdev_nvme_attach_controller" 00:22:40.970 },{ 00:22:40.970 "params": { 00:22:40.970 "name": "Nvme5", 00:22:40.970 "trtype": "tcp", 00:22:40.970 "traddr": "10.0.0.2", 00:22:40.970 "adrfam": "ipv4", 00:22:40.970 "trsvcid": "4420", 00:22:40.971 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:40.971 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:40.971 "hdgst": false, 00:22:40.971 "ddgst": false 00:22:40.971 }, 00:22:40.971 "method": "bdev_nvme_attach_controller" 00:22:40.971 },{ 00:22:40.971 "params": { 00:22:40.971 "name": "Nvme6", 00:22:40.971 "trtype": "tcp", 00:22:40.971 "traddr": "10.0.0.2", 00:22:40.971 "adrfam": "ipv4", 00:22:40.971 "trsvcid": "4420", 00:22:40.971 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:40.971 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:40.971 "hdgst": false, 00:22:40.971 "ddgst": false 00:22:40.971 }, 00:22:40.971 "method": "bdev_nvme_attach_controller" 00:22:40.971 },{ 00:22:40.971 "params": { 00:22:40.971 "name": "Nvme7", 00:22:40.971 "trtype": "tcp", 00:22:40.971 "traddr": "10.0.0.2", 00:22:40.971 "adrfam": "ipv4", 00:22:40.971 "trsvcid": "4420", 00:22:40.971 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:40.971 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:40.971 "hdgst": false, 00:22:40.971 "ddgst": false 00:22:40.971 }, 00:22:40.971 "method": "bdev_nvme_attach_controller" 00:22:40.971 },{ 00:22:40.971 "params": { 00:22:40.971 "name": "Nvme8", 00:22:40.971 "trtype": "tcp", 00:22:40.971 "traddr": "10.0.0.2", 00:22:40.971 "adrfam": "ipv4", 00:22:40.971 "trsvcid": "4420", 00:22:40.971 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:40.971 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:40.971 "hdgst": false, 00:22:40.971 "ddgst": false 00:22:40.971 }, 00:22:40.971 "method": "bdev_nvme_attach_controller" 00:22:40.971 },{ 00:22:40.971 "params": { 00:22:40.971 "name": "Nvme9", 00:22:40.971 "trtype": "tcp", 00:22:40.971 "traddr": "10.0.0.2", 00:22:40.971 "adrfam": "ipv4", 00:22:40.971 "trsvcid": "4420", 00:22:40.971 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:40.971 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:40.971 "hdgst": false, 00:22:40.971 "ddgst": false 00:22:40.971 }, 00:22:40.971 "method": "bdev_nvme_attach_controller" 00:22:40.971 },{ 00:22:40.971 "params": { 00:22:40.971 "name": "Nvme10", 00:22:40.971 "trtype": "tcp", 00:22:40.971 "traddr": "10.0.0.2", 00:22:40.971 "adrfam": "ipv4", 00:22:40.971 "trsvcid": "4420", 00:22:40.971 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:40.971 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:40.971 "hdgst": false, 00:22:40.971 "ddgst": false 00:22:40.971 }, 00:22:40.971 "method": "bdev_nvme_attach_controller" 00:22:40.971 }' 00:22:40.971 [2024-11-26 07:32:09.053732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.237 [2024-11-26 07:32:09.095666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.613 Running I/O for 10 seconds... 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:42.872 07:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=80 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 80 -ge 100 ']' 00:22:42.872 07:32:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:43.131 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:43.131 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:43.131 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:43.131 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:43.131 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.131 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 796364 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 796364 ']' 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 796364 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796364 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 796364' 00:22:43.414 killing process with pid 796364 00:22:43.414 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 796364 00:22:43.415 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 796364 00:22:43.415 [2024-11-26 07:32:11.302699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cd700 is same with the state(6) to be set
[... repeated identical recv-state messages for tqpair=0x6cd700 omitted ...]
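For reference, the waitforio/killprocess sequence traced above boils down to the minimal sketch below. It is not the test script itself: it assumes SPDK's rpc.py is on PATH (the real run goes through the suite's rpc_cmd wrapper), that a bdevperf instance is listening on /var/tmp/bdevperf.sock, that the bdev is named Nvme1n1, and that the target pid is passed as $1; the 100-op threshold, 10 retries and 0.25 s sleep simply mirror the values visible in the trace.

#!/usr/bin/env bash
# Sketch of the wait-for-I/O / kill sequence seen in the trace above (assumptions noted in the lead-in).
sock=/var/tmp/bdevperf.sock
bdev=Nvme1n1
target_pid=$1

i=10
while (( i != 0 )); do
    # bdev_get_iostat returns JSON; num_read_ops is the counter the test polls.
    read_io_count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        break    # enough I/O has been observed on the bdev
    fi
    sleep 0.25
    (( i-- ))
done

# Once I/O is flowing, stop the nvmf target; in-flight commands are then
# completed with ABORTED - SQ DELETION, as in the log entries that follow.
kill "$target_pid"
wait "$target_pid" 2>/dev/null || true

Killing the target while bdevperf still has I/O outstanding is what produces the burst of recv-state errors and the ABORTED - SQ DELETION completions in the rest of this section.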
00:22:43.415 [2024-11-26 07:32:11.305938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d0180 is same with the state(6) to be set
[... repeated identical recv-state messages for tqpair=0x6d0180 omitted ...]
00:22:43.416 [2024-11-26 07:32:11.307932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.307969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26
07:32:11.307987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.307995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 
07:32:11.308143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.416 [2024-11-26 07:32:11.308264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.416 [2024-11-26 07:32:11.308270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 
07:32:11.308293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.417 [2024-11-26 07:32:11.308859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.417 [2024-11-26 07:32:11.308867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.418 [2024-11-26 07:32:11.308874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.418 [2024-11-26 07:32:11.308882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.418 [2024-11-26 07:32:11.308888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.418 [2024-11-26 07:32:11.308896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.418 [2024-11-26 07:32:11.308902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.418 [2024-11-26 07:32:11.308910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.418 [2024-11-26 07:32:11.308920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.418 [2024-11-26 07:32:11.308928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.418 [2024-11-26 07:32:11.308934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.418 [2024-11-26 07:32:11.308939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ce0c0 is same with the state(6) to be set
[... repeated identical recv-state messages for tqpair=0x6ce0c0 omitted ...]
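The controllers being torn down here were attached from a JSON config like the fragment echoed at the top of this run (the Nvme9/Nvme10 params blocks). As a sketch only: the helper below emits one such entry. The params values and method name are taken from that fragment, while the function name and the surrounding subsystems/config wrapper are assumptions about the usual SPDK JSON-config shape rather than something shown in this output.

# Illustrative helper (name is not from the test suite) that emits a one-controller
# config in the same shape as the fragment logged above; $1 selects the index.
gen_bdevperf_json() {
    local i=$1
    cat <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme$i",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode$i",
            "hostnqn": "nqn.2016-06.io.spdk:host$i",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
}

An app such as bdevperf can typically be pointed at a file generated this way through its --json option, e.g. gen_bdevperf_json 9 > /tmp/nvme9.json (hypothetical path).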
00:22:43.418 [2024-11-26 07:32:11.309837] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:43.418 [2024-11-26 07:32:11.310439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ce5b0 is same with the state(6) to be set
[... repeated identical recv-state messages for tqpair=0x6ce5b0 omitted ...]
00:22:43.419 [2024-11-26 07:32:11.310832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x6ce5b0 is same with the state(6) to be set 00:22:43.419 [2024-11-26 07:32:11.310838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ce5b0 is same with the state(6) to be set 00:22:43.419 [2024-11-26 07:32:11.310844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ce5b0 is same with the state(6) to be set 00:22:43.419 [2024-11-26 07:32:11.311480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:43.419 [2024-11-26 07:32:11.311537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cbd30 (9): Bad file descriptor 00:22:43.419 [2024-11-26 07:32:11.311657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.419 [2024-11-26 07:32:11.311670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.419 [2024-11-26 07:32:11.311681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.419 [2024-11-26 07:32:11.311689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.419 [2024-11-26 07:32:11.311698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.419 [2024-11-26 07:32:11.311705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.419 [2024-11-26 07:32:11.311714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.419 [2024-11-26 07:32:11.311721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.419 [2024-11-26 07:32:11.311729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.419 [2024-11-26 07:32:11.311737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.419 [2024-11-26 07:32:11.311745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.419 [2024-11-26 07:32:11.311756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.419 [2024-11-26 07:32:11.311764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.419 [2024-11-26 07:32:11.311771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.311988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.311995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.420 [2024-11-26 07:32:11.312223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.420 [2024-11-26 07:32:11.312230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.420 [2024-11-26 07:32:11.312237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00
is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the
state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cee00 is same with the state(6) to be set 00:22:43.421 [2024-11-26 07:32:11.312533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.421 [2024-11-26 07:32:11.312622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.421 [2024-11-26 07:32:11.312628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.312987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.312994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 
[2024-11-26 07:32:11.313264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.422 [2024-11-26 07:32:11.313367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.422 [2024-11-26 07:32:11.313373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 
07:32:11.313418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.423 [2024-11-26 07:32:11.313536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.423 [2024-11-26 07:32:11.313544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313686] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 
00:22:43.423 [2024-11-26 07:32:11.313823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.423 [2024-11-26 07:32:11.313841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.313848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.313854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.313890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.313940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.313997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.314049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.314098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.314156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.314208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.314260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.314310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.314363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf2d0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is 
same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.315629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.326522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.326916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.424 [2024-11-26 07:32:11.326925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.424 [2024-11-26 07:32:11.328426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with 
the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.328440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.424 [2024-11-26 07:32:11.328450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cf7c0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.328867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.328878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.328887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.328897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.328906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.328915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.328929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.328938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cc1b0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.328973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.328985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.328995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c00d0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.329076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3fe70 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.329195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329206] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26710 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.329304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dfd0 is same with the state(6) to be set 00:22:43.425 [2024-11-26 07:32:11.329408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.425 [2024-11-26 07:32:11.329429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.425 [2024-11-26 07:32:11.329438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 
[2024-11-26 07:32:11.329448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0610 is same with the state(6) to be set 00:22:43.426 [2024-11-26 07:32:11.329513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7a50 is same with the state(6) to be set 00:22:43.426 [2024-11-26 07:32:11.329614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.426 [2024-11-26 07:32:11.329682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.426 [2024-11-26 07:32:11.329690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c1660 is same with the state(6) to be set 00:22:43.426 [2024-11-26 07:32:11.332833] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:43.426 [2024-11-26 07:32:11.332864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:43.426 [2024-11-26 07:32:11.332880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:43.426 [2024-11-26 07:32:11.332897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf7a50 (9): Bad file descriptor 00:22:43.426 [2024-11-26 07:32:11.332912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c00d0 (9): Bad file descriptor 00:22:43.426 [2024-11-26 07:32:11.333173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.426 [2024-11-26 07:32:11.333191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cbd30 with addr=10.0.0.2, port=4420 00:22:43.426 [2024-11-26 07:32:11.333201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cbd30 is same with the state(6) to be set 00:22:43.426 [2024-11-26 07:32:11.333805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cbd30 (9): Bad file descriptor 00:22:43.426 [2024-11-26 07:32:11.333886] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:43.426 [2024-11-26 07:32:11.334652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:43.426 [2024-11-26 07:32:11.334703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb12870 (9): Bad file descriptor 00:22:43.426 [2024-11-26 07:32:11.334876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.426 [2024-11-26 07:32:11.334892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c00d0 with addr=10.0.0.2, port=4420 00:22:43.426 [2024-11-26 07:32:11.334906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c00d0 is same with the state(6) to be set 00:22:43.426 [2024-11-26 07:32:11.335003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.426 [2024-11-26 07:32:11.335019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf7a50 with addr=10.0.0.2, port=4420 00:22:43.426 [2024-11-26 07:32:11.335028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7a50 is same with the state(6) to be set 00:22:43.426 [2024-11-26 07:32:11.335037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:43.426 [2024-11-26 07:32:11.335046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:43.426 [2024-11-26 07:32:11.335057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:22:43.426 [2024-11-26 07:32:11.335066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:43.426 [2024-11-26 07:32:11.335161] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:43.426 [2024-11-26 07:32:11.335217] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:43.426 [2024-11-26 07:32:11.335313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c00d0 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.335331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf7a50 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.335436] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:43.426 [2024-11-26 07:32:11.335629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:43.426 [2024-11-26 07:32:11.335644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb12870 with addr=10.0.0.2, port=4420
00:22:43.426 [2024-11-26 07:32:11.335653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb12870 is same with the state(6) to be set
00:22:43.426 [2024-11-26 07:32:11.335662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:43.426 [2024-11-26 07:32:11.335670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:43.426 [2024-11-26 07:32:11.335678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:43.426 [2024-11-26 07:32:11.335687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:43.426 [2024-11-26 07:32:11.335696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:43.426 [2024-11-26 07:32:11.335703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:43.426 [2024-11-26 07:32:11.335710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:43.426 [2024-11-26 07:32:11.335718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:43.426 [2024-11-26 07:32:11.335764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb12870 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.335801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:22:43.426 [2024-11-26 07:32:11.335809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:22:43.426 [2024-11-26 07:32:11.335817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:22:43.426 [2024-11-26 07:32:11.335824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:22:43.426 [2024-11-26 07:32:11.338821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cc1b0 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.338848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3fe70 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.338867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26710 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.338886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1dfd0 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.338904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e0610 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.338924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c1660 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.339055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:43.426 [2024-11-26 07:32:11.339331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:43.426 [2024-11-26 07:32:11.339347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cbd30 with addr=10.0.0.2, port=4420
00:22:43.426 [2024-11-26 07:32:11.339356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cbd30 is same with the state(6) to be set
00:22:43.426 [2024-11-26 07:32:11.339393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cbd30 (9): Bad file descriptor
00:22:43.426 [2024-11-26 07:32:11.339430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:43.426 [2024-11-26 07:32:11.339439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:43.426 [2024-11-26 07:32:11.339447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:43.426 [2024-11-26 07:32:11.339454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:43.426 [2024-11-26 07:32:11.343909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:43.426 [2024-11-26 07:32:11.343926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:43.426 [2024-11-26 07:32:11.344152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:43.426 [2024-11-26 07:32:11.344169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf7a50 with addr=10.0.0.2, port=4420
00:22:43.426 [2024-11-26 07:32:11.344178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7a50 is same with the state(6) to be set
00:22:43.426 [2024-11-26 07:32:11.344401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:43.426 [2024-11-26 07:32:11.344414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c00d0 with addr=10.0.0.2, port=4420
00:22:43.426 [2024-11-26 07:32:11.344422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c00d0 is same with the state(6) to be set
00:22:43.427 [2024-11-26 07:32:11.344459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf7a50 (9): Bad file descriptor
00:22:43.427 [2024-11-26 07:32:11.344471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c00d0 (9): Bad file descriptor
00:22:43.427 [2024-11-26 07:32:11.344506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:43.427 [2024-11-26 07:32:11.344515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:43.427 [2024-11-26 07:32:11.344524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:43.427 [2024-11-26 07:32:11.344532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:43.427 [2024-11-26 07:32:11.344540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:43.427 [2024-11-26 07:32:11.344551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:43.427 [2024-11-26 07:32:11.344559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:43.427 [2024-11-26 07:32:11.344566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:43.427 [2024-11-26 07:32:11.345412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:43.427 [2024-11-26 07:32:11.345621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.427 [2024-11-26 07:32:11.345635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb12870 with addr=10.0.0.2, port=4420 00:22:43.427 [2024-11-26 07:32:11.345644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb12870 is same with the state(6) to be set 00:22:43.427 [2024-11-26 07:32:11.345682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb12870 (9): Bad file descriptor 00:22:43.427 [2024-11-26 07:32:11.345719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:43.427 [2024-11-26 07:32:11.345727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:43.427 [2024-11-26 07:32:11.345736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:43.427 [2024-11-26 07:32:11.345744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:43.427 [2024-11-26 07:32:11.348974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.348995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.427 [2024-11-26 07:32:11.349464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.427 [2024-11-26 07:32:11.349471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.349969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.349976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d0450 is same with the state(6) to be set 00:22:43.428 [2024-11-26 07:32:11.351021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.351035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.428 [2024-11-26 07:32:11.351047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.428 [2024-11-26 07:32:11.351054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351207] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.429 [2024-11-26 07:32:11.351634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.429 [2024-11-26 07:32:11.351641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:43.430 [2024-11-26 07:32:11.351816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 
07:32:11.351971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.351986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.351993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.352001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb6a10 is same with the state(6) to be set 00:22:43.430 [2024-11-26 07:32:11.353021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.430 [2024-11-26 07:32:11.353259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.430 [2024-11-26 07:32:11.353266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.431 [2024-11-26 07:32:11.353858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.431 [2024-11-26 07:32:11.353865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.353879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.353894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.353908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.353923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.353938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.353960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.353974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.353989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.353996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9450 is same with the state(6) to be set 00:22:43.432 [2024-11-26 07:32:11.355017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355074] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.432 [2024-11-26 07:32:11.355413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.432 [2024-11-26 07:32:11.355421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:43.433 [2024-11-26 07:32:11.355685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 
07:32:11.355837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.355991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.355998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.356005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbba980 is same with the state(6) to be set 00:22:43.433 [2024-11-26 07:32:11.357020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.357033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.433 [2024-11-26 07:32:11.357045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.433 [2024-11-26 07:32:11.357052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357166] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.434 [2024-11-26 07:32:11.357634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-26 07:32:11.357642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:43.435 [2024-11-26 07:32:11.357765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 
07:32:11.357915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.357979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.357986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1b2a0 is same with the state(6) to be set 00:22:43.435 [2024-11-26 07:32:11.359009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.435 [2024-11-26 07:32:11.359253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.435 [2024-11-26 07:32:11.359262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.359826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.359833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.436 [2024-11-26 07:32:11.364791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.436 [2024-11-26 07:32:11.364803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.437 [2024-11-26 07:32:11.364819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.437 [2024-11-26 07:32:11.364834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.437 [2024-11-26 07:32:11.364848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.437 [2024-11-26 07:32:11.364863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.437 [2024-11-26 07:32:11.364878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.437 [2024-11-26 07:32:11.364906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.437 [2024-11-26 07:32:11.364920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.437 [2024-11-26 07:32:11.364935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.437 [2024-11-26 07:32:11.364942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb0040 is same with the state(6) to be set 00:22:43.437 [2024-11-26 07:32:11.365931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.365952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.365962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.365971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.366075] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:22:43.437 [2024-11-26 07:32:11.366090] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:22:43.437 [2024-11-26 07:32:11.366166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:43.437 task offset: 29312 on job bdev=Nvme2n1 fails 00:22:43.437 00:22:43.437 Latency(us) 00:22:43.437 [2024-11-26T06:32:11.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.437 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme1n1 ended in about 0.93 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme1n1 : 0.93 205.62 12.85 68.54 0.00 231088.08 26442.35 207891.59 00:22:43.437 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme2n1 ended in about 0.89 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme2n1 : 0.89 214.83 13.43 71.61 0.00 217113.49 4217.10 226127.69 00:22:43.437 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme3n1 ended in about 0.94 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme3n1 : 0.94 205.18 12.82 68.39 0.00 223652.73 13563.10 219745.06 00:22:43.437 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme4n1 ended in about 0.91 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme4n1 : 0.91 283.39 17.71 70.03 0.00 169702.34 7094.98 218833.25 00:22:43.437 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme5n1 ended in about 0.92 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme5n1 : 0.92 209.80 13.11 69.93 0.00 210539.52 16868.40 225215.89 00:22:43.437 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme6n1 ended in about 0.94 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme6n1 : 0.94 209.01 13.06 68.25 0.00 208906.85 20287.67 217009.64 00:22:43.437 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme7n1 ended in about 0.94 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme7n1 : 0.94 204.31 12.77 68.10 0.00 208713.79 13449.13 223392.28 00:22:43.437 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme8n1 ended in about 0.94 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme8n1 : 0.94 203.88 12.74 67.96 0.00 205373.66 23251.03 209715.20 00:22:43.437 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Verification LBA range: start 0x0 length 0x400 
00:22:43.437 Nvme9n1 : 0.91 210.67 13.17 0.00 0.00 258507.76 18805.98 235245.75 00:22:43.437 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:43.437 Job: Nvme10n1 ended in about 0.95 seconds with error 00:22:43.437 Verification LBA range: start 0x0 length 0x400 00:22:43.437 Nvme10n1 : 0.95 134.92 8.43 67.46 0.00 265863.94 18008.15 249834.63 00:22:43.437 [2024-11-26T06:32:11.537Z] =================================================================================================================== 00:22:43.437 [2024-11-26T06:32:11.537Z] Total : 2081.60 130.10 620.27 0.00 216423.58 4217.10 249834.63 00:22:43.437 [2024-11-26 07:32:11.399589] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:43.437 [2024-11-26 07:32:11.399638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.399982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.437 [2024-11-26 07:32:11.400002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cc1b0 with addr=10.0.0.2, port=4420 00:22:43.437 [2024-11-26 07:32:11.400013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cc1b0 is same with the state(6) to be set 00:22:43.437 [2024-11-26 07:32:11.400232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.437 [2024-11-26 07:32:11.400244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c1660 with addr=10.0.0.2, port=4420 00:22:43.437 [2024-11-26 07:32:11.400252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c1660 is same with the state(6) to be set 00:22:43.437 [2024-11-26 07:32:11.400388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.437 [2024-11-26 07:32:11.400398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb1dfd0 with addr=10.0.0.2, port=4420 00:22:43.437 [2024-11-26 07:32:11.400406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1dfd0 is same with the state(6) to be set 00:22:43.437 [2024-11-26 07:32:11.400607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.437 [2024-11-26 07:32:11.400617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e0610 with addr=10.0.0.2, port=4420 00:22:43.437 [2024-11-26 07:32:11.400625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e0610 is same with the state(6) to be set 00:22:43.437 [2024-11-26 07:32:11.402008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.402025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.402034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.402043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:43.437 [2024-11-26 07:32:11.402317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.437 [2024-11-26 07:32:11.402331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3fe70 with addr=10.0.0.2, port=4420 
00:22:43.437 [2024-11-26 07:32:11.402343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3fe70 is same with the state(6) to be set 00:22:43.437 [2024-11-26 07:32:11.402492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.437 [2024-11-26 07:32:11.402503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26710 with addr=10.0.0.2, port=4420 00:22:43.437 [2024-11-26 07:32:11.402510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26710 is same with the state(6) to be set 00:22:43.438 [2024-11-26 07:32:11.402523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cc1b0 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.402534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c1660 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.402543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1dfd0 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.402552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e0610 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.402583] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:22:43.438 [2024-11-26 07:32:11.402593] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:22:43.438 [2024-11-26 07:32:11.402604] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:22:43.438 [2024-11-26 07:32:11.402613] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:22:43.438 [2024-11-26 07:32:11.403065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.438 [2024-11-26 07:32:11.403083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6cbd30 with addr=10.0.0.2, port=4420 00:22:43.438 [2024-11-26 07:32:11.403091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6cbd30 is same with the state(6) to be set 00:22:43.438 [2024-11-26 07:32:11.403285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.438 [2024-11-26 07:32:11.403296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c00d0 with addr=10.0.0.2, port=4420 00:22:43.438 [2024-11-26 07:32:11.403303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c00d0 is same with the state(6) to be set 00:22:43.438 [2024-11-26 07:32:11.403442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.438 [2024-11-26 07:32:11.403452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf7a50 with addr=10.0.0.2, port=4420 00:22:43.438 [2024-11-26 07:32:11.403459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf7a50 is same with the state(6) to be set 00:22:43.438 [2024-11-26 07:32:11.403603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.438 [2024-11-26 07:32:11.403613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb12870 with addr=10.0.0.2, port=4420 00:22:43.438 [2024-11-26 07:32:11.403620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb12870 is same with the state(6) to be set 00:22:43.438 [2024-11-26 07:32:11.403630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3fe70 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.403639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26710 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.403647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.403674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:43.438 [2024-11-26 07:32:11.403682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.403700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:43.438 [2024-11-26 07:32:11.403706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.403724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:43.438 [2024-11-26 07:32:11.403730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.403748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:43.438 [2024-11-26 07:32:11.403825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cbd30 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.403835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c00d0 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.403843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf7a50 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.403851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb12870 (9): Bad file descriptor 00:22:43.438 [2024-11-26 07:32:11.403859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.403877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:43.438 [2024-11-26 07:32:11.403884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.403901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:43.438 [2024-11-26 07:32:11.403923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.403945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:43.438 [2024-11-26 07:32:11.403958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.403976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:43.438 [2024-11-26 07:32:11.403983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.403989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.403995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.404001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:43.438 [2024-11-26 07:32:11.404007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:43.438 [2024-11-26 07:32:11.404013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:43.438 [2024-11-26 07:32:11.404019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:43.438 [2024-11-26 07:32:11.404025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:43.698 07:32:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 796602 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 796602 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 796602 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:44.631 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:44.632 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:44.632 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:44.632 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:44.632 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:44.632 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:44.890 rmmod nvme_tcp 00:22:44.890 
rmmod nvme_fabrics 00:22:44.890 rmmod nvme_keyring 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 796364 ']' 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 796364 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 796364 ']' 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 796364 00:22:44.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (796364) - No such process 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 796364 is not found' 00:22:44.890 Process with pid 796364 is not found 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.890 07:32:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.794 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.795 00:22:46.795 real 0m7.034s 00:22:46.795 user 0m16.140s 00:22:46.795 sys 0m1.313s 00:22:46.795 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.795 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.795 ************************************ 00:22:46.795 END TEST nvmf_shutdown_tc3 00:22:46.795 ************************************ 00:22:47.054 07:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:47.054 ************************************ 00:22:47.054 START TEST nvmf_shutdown_tc4 00:22:47.054 ************************************ 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:47.054 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:47.055 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:47.055 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.055 07:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:47.055 Found net devices under 0000:86:00.0: cvl_0_0 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:47.055 Found net devices under 0000:86:00.1: cvl_0_1 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:47.055 07:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:47.055 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:47.055 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.055 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.055 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.055 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.055 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:47.055 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:47.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:22:47.314 00:22:47.314 --- 10.0.0.2 ping statistics --- 00:22:47.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.314 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:22:47.314 00:22:47.314 --- 10.0.0.1 ping statistics --- 00:22:47.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.314 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=797705 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 797705 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 797705 ']' 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.314 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.314 [2024-11-26 07:32:15.330245] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:22:47.314 [2024-11-26 07:32:15.330292] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.314 [2024-11-26 07:32:15.398092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.573 [2024-11-26 07:32:15.439873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.573 [2024-11-26 07:32:15.439911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.573 [2024-11-26 07:32:15.439919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.573 [2024-11-26 07:32:15.439925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.573 [2024-11-26 07:32:15.439930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.573 [2024-11-26 07:32:15.441600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.573 [2024-11-26 07:32:15.441696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.573 [2024-11-26 07:32:15.441804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.573 [2024-11-26 07:32:15.441805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.573 [2024-11-26 07:32:15.589813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:47.573 07:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:47.573 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:47.574 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.574 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.832 Malloc1 
00:22:47.832 [2024-11-26 07:32:15.701456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.832 Malloc2 00:22:47.832 Malloc3 00:22:47.832 Malloc4 00:22:47.832 Malloc5 00:22:47.832 Malloc6 00:22:48.091 Malloc7 00:22:48.091 Malloc8 00:22:48.091 Malloc9 00:22:48.091 Malloc10 00:22:48.091 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.091 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:48.091 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.091 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:48.091 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=797923 00:22:48.091 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:48.091 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:48.091 [2024-11-26 07:32:16.185153] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 797705 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 797705 ']' 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 797705 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 797705 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 797705' 00:22:53.365 killing process with pid 797705 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 797705 00:22:53.365 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 797705 00:22:53.365 [2024-11-26 07:32:21.200381] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0550 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.200425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0550 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.200433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0550 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.201318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2250 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.201347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2250 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.201356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2250 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.201362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2250 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.201374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2250 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.201381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2250 is same with the state(6) to be set 00:22:53.365 [2024-11-26 07:32:21.201387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b2250 is same with the state(6) to be set 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 starting I/O failed: -6 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 00:22:53.365 Write completed with error (sct=0, sc=8) 
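The kill sequence traced at 07:32:21 just above (the '[' -z ... ']' check, kill -0, ps --no-headers -o comm=, the comparison against sudo, then kill and wait on pid 797705) is the autotest killprocess helper from common/autotest_common.sh. Reconstructed from that xtrace, with the non-Linux and sudo-wrapped branches of the real function simplified away, it amounts to:

  # Sketch reconstructed from the xtrace; the real helper also handles FreeBSD
  # (different ps invocation) and targets that were launched through sudo.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                  # no pid supplied
      kill -0 "$pid" 2>/dev/null || return 0     # process already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          if [ "$process_name" = sudo ]; then
              kill "$(pgrep -P "$pid")"          # signal the real app under the sudo wrapper
          else
              echo "killing process with pid $pid"
              kill "$pid"
          fi
      fi
      wait "$pid" 2>/dev/null || true            # reap it if it is our child
  }

In nvmf_shutdown_tc4 this takes the first target (pid 797705, process name reactor_1) down while spdk_nvme_perf (perfpid 797923, started a few seconds earlier against 10.0.0.2:4420) still has I/O outstanding, which is what produces the error cascade that follows.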
00:22:53.365 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 [2024-11-26 07:32:21.203068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:53.366 NVMe io qpair process completion error 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 
00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 [2024-11-26 07:32:21.203874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae7d0 is same with the state(6) to be set 00:22:53.366 [2024-11-26 07:32:21.203889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae7d0 is same with the state(6) to be set 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 [2024-11-26 07:32:21.203896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae7d0 is same with the state(6) to be set 00:22:53.366 [2024-11-26 07:32:21.203903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae7d0 is same with the state(6) to be set 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 [2024-11-26 07:32:21.203909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae7d0 is same with the state(6) to be set 00:22:53.366 [2024-11-26 07:32:21.203915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae7d0 is same with the state(6) to be set 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 [2024-11-26 07:32:21.203922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae7d0 is same with the state(6) to be set 00:22:53.366 [2024-11-26 07:32:21.203928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae7d0 is same with the state(6) to be set 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 [2024-11-26 07:32:21.204124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 
00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 [2024-11-26 07:32:21.204821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af680 is same with the state(6) to be set 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 [2024-11-26 07:32:21.204845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af680 is same with the state(6) to be set 00:22:53.366 [2024-11-26 07:32:21.204854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af680 is same with the state(6) to be set 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 [2024-11-26 07:32:21.204861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af680 is same with the state(6) to be set 00:22:53.366 starting I/O failed: -6 00:22:53.366 [2024-11-26 07:32:21.204869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af680 is same with the state(6) to be set 00:22:53.366 [2024-11-26 07:32:21.204876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af680 is same with the state(6) to be set 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 [2024-11-26 07:32:21.204918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 
Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 starting I/O failed: -6 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.366 Write completed with error (sct=0, sc=8) 00:22:53.367 [2024-11-26 07:32:21.205226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afb70 is same with the state(6) to be set 00:22:53.367 starting I/O failed: -6 00:22:53.367 [2024-11-26 07:32:21.205239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afb70 is same with the state(6) to be set 00:22:53.367 [2024-11-26 07:32:21.205246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afb70 is same with the state(6) to be set 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 [2024-11-26 07:32:21.205252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afb70 is same with tstarting I/O failed: -6 00:22:53.367 he state(6) to be set 00:22:53.367 [2024-11-26 07:32:21.205260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afb70 is same with the state(6) to be set 00:22:53.367 [2024-11-26 07:32:21.205266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afb70 is same with the state(6) to be set 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 
00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 [2024-11-26 07:32:21.205970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 
00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 [2024-11-26 07:32:21.207560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport 
error -6 (No such device or address) on qpair id 3 00:22:53.367 NVMe io qpair process completion error 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 starting I/O failed: -6 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.367 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 [2024-11-26 07:32:21.208155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 [2024-11-26 07:32:21.208179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 [2024-11-26 07:32:21.208186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 [2024-11-26 07:32:21.208193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 [2024-11-26 07:32:21.208200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 [2024-11-26 07:32:21.208207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 [2024-11-26 07:32:21.208213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 [2024-11-26 07:32:21.208219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with tstarting I/O failed: -6 00:22:53.368 he state(6) to be set 00:22:53.368 [2024-11-26 07:32:21.208231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 [2024-11-26 07:32:21.208237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa457d0 is same with the state(6) to be set 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, 
sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 [2024-11-26 07:32:21.208558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:53.368 starting I/O failed: -6 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 
00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 [2024-11-26 07:32:21.209601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O 
failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 [2024-11-26 07:32:21.210660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.368 Write completed with error (sct=0, sc=8) 00:22:53.368 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, 
sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 [2024-11-26 07:32:21.212442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:53.369 NVMe io qpair process completion error 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with 
error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 [2024-11-26 07:32:21.213423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:53.369 starting I/O failed: -6 00:22:53.369 starting I/O failed: -6 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with 
error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.369 starting I/O failed: -6 00:22:53.369 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 [2024-11-26 07:32:21.214385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed 
with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 [2024-11-26 07:32:21.215438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 
00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.370 Write completed with error (sct=0, sc=8) 00:22:53.370 starting I/O failed: -6 00:22:53.371 Write completed with error (sct=0, sc=8) 00:22:53.371 starting I/O failed: -6 00:22:53.371 Write completed with error (sct=0, sc=8) 00:22:53.371 starting I/O failed: -6 00:22:53.371 Write completed with error (sct=0, sc=8) 00:22:53.371 starting I/O failed: -6 00:22:53.371 Write completed with error (sct=0, sc=8) 00:22:53.371 starting I/O failed: -6 00:22:53.371 Write completed with error (sct=0, sc=8) 00:22:53.371 starting I/O failed: -6 00:22:53.371 Write completed with error (sct=0, sc=8) 00:22:53.371 starting I/O failed: -6 00:22:53.371 Write completed with error (sct=0, sc=8) 00:22:53.371 starting I/O failed: -6 
00:22:53.371 Write completed with error (sct=0, sc=8)
00:22:53.371 starting I/O failed: -6
00:22:53.371 [... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines repeated for the remaining outstanding writes ...]
00:22:53.371 [2024-11-26 07:32:21.217391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:53.371 NVMe io qpair process completion error
00:22:53.371 [... repeated write failures omitted ...]
00:22:53.371 [2024-11-26 07:32:21.218549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:53.371 [... repeated write failures omitted ...]
00:22:53.371 [2024-11-26 07:32:21.219364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:53.372 [... repeated write failures omitted ...]
00:22:53.372 [2024-11-26 07:32:21.220430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:53.372 [... repeated write failures omitted ...]
00:22:53.372 [2024-11-26 07:32:21.222309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:53.372 NVMe io qpair process completion error
00:22:53.372 [... repeated write failures omitted ...]
00:22:53.372 [2024-11-26 07:32:21.223364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:53.373 [... repeated write failures omitted ...]
00:22:53.373 [2024-11-26 07:32:21.224298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:53.373 [... repeated write failures omitted ...]
00:22:53.373 [2024-11-26 07:32:21.225314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:53.374 [... repeated write failures omitted ...]
00:22:53.374 [2024-11-26 07:32:21.232178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:53.374 NVMe io qpair process completion error
00:22:53.374 [... repeated write failures omitted ...]
00:22:53.374 [2024-11-26 07:32:21.233156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:53.374 [... repeated write failures omitted ...]
00:22:53.374 [2024-11-26 07:32:21.234103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:53.375 [... repeated write failures omitted ...]
00:22:53.375 [2024-11-26 07:32:21.235156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:53.375 [... repeated write failures omitted ...]
00:22:53.375 [2024-11-26 07:32:21.236731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:53.375 NVMe io qpair process completion error
00:22:53.375 [... repeated write failures omitted ...]
00:22:53.375 [2024-11-26 07:32:21.237813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:53.376 [... repeated write failures omitted ...]
00:22:53.376 [2024-11-26 07:32:21.238759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:53.376 [... repeated write failures omitted ...]
00:22:53.376 [2024-11-26 07:32:21.239801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:53.377 [... repeated write failures omitted ...]
00:22:53.377 [2024-11-26 07:32:21.241925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:53.377 NVMe io qpair process completion error
00:22:53.377 [... repeated write failures omitted ...]
00:22:53.377 [2024-11-26 07:32:21.243058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:53.377 [... repeated write failures omitted ...]
00:22:53.377 [2024-11-26 07:32:21.243915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:53.377 [... repeated write failures continue ...]
00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 Write completed with error (sct=0, sc=8) 00:22:53.377 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 [2024-11-26 07:32:21.244973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O 
failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O 
failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 [2024-11-26 07:32:21.253957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:53.378 NVMe io qpair process completion error 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error 
(sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.378 Write completed with error (sct=0, sc=8) 00:22:53.378 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 
00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, 
sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 
00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.379 starting I/O failed: -6 00:22:53.379 Write completed with error (sct=0, sc=8) 00:22:53.380 starting I/O failed: -6 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 starting I/O failed: -6 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 starting I/O failed: -6 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 starting I/O failed: -6 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 starting I/O failed: -6 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write 
completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 00:22:53.380 Write completed with error (sct=0, sc=8) 
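The failed writes above were issued by the spdk_nvme_perf run that is summarized just below (its binary path appears further down in this log as /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf), and the summary also advises using a lower queue depth or smaller I/O size when the controller's IO queue size is only 128. A minimal sketch of how such a perf run could be pointed at one of these TCP subsystems with a smaller queue depth; every option value here (-q, -o, -w, -t, -r and the chosen subsystem) is an illustrative assumption, not the configuration shutdown.sh actually used in this run:

    # Hedged sketch only: re-run the perf tool reported in this log with a queue
    # depth that fits the advertised IO queue size (128). All values below are
    # assumptions for illustration.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    "$PERF" -q 64 -o 4096 -w randwrite -t 10 \
            -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode5'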
00:22:53.380 Initializing NVMe Controllers
00:22:53.380 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:53.380 Controller IO queue size 128, less than required.
00:22:53.380 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[... the same "Attached to NVMe over Fabrics controller at 10.0.0.2:4420" line, followed by the same two queue-size warnings, was logged for nqn.2016-06.io.spdk:cnode4, cnode5, cnode10, cnode2, cnode3, cnode7, cnode9, cnode1 and cnode8 ...]
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:53.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:53.380 Initialization complete. Launching workers.
00:22:53.380 ========================================================
00:22:53.380 Latency(us)
00:22:53.380 Device Information : IOPS MiB/s Average min max
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2122.19 91.19 60320.00 714.47 112916.49
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2124.92 91.31 60254.93 780.67 115539.42
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2155.85 92.63 59411.82 853.25 118214.97
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2149.32 92.35 59684.49 738.76 128123.17
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2122.19 91.19 60294.51 453.42 107164.24
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2132.07 91.61 59439.24 689.72 105705.71
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2131.86 91.60 59457.52 678.53 104889.53
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2137.96 91.87 59300.69 675.82 104111.93
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2194.34 94.29 57793.45 710.16 103788.51
00:22:53.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2177.30 93.56 58259.22 936.09 106078.14
00:22:53.381 ========================================================
00:22:53.381 Total : 21448.01 921.59 59413.48 453.42 128123.17
00:22:53.381
00:22:53.381 [2024-11-26 07:32:21.263992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1467ef0 is same with the state(6) to be set
00:22:53.381 [2024-11-26 07:32:21.264058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1467890 is same with the state(6) to be set
00:22:53.381 [2024-11-26 07:32:21.264097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1467bc0 is same with the state(6) to be set
00:22:53.381 [2024-11-26 07:32:21.264134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469900 is same with the state(6) to be set
00:22:53.381 [2024-11-26 07:32:21.264171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469ae0 is same with the state(6) to be set
00:22:53.381 [2024-11-26 07:32:21.264208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1467560 is same with the state(6) to be set 00:22:53.381 [2024-11-26 07:32:21.264244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468410 is same with the state(6) to be set 00:22:53.381 [2024-11-26 07:32:21.264281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468a70 is same with the state(6) to be set 00:22:53.381 [2024-11-26 07:32:21.264317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1469720 is same with the state(6) to be set 00:22:53.381 [2024-11-26 07:32:21.264354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1468740 is same with the state(6) to be set 00:22:53.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:53.640 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 797923 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 797923 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 797923 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:54.579 07:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.579 rmmod nvme_tcp 00:22:54.579 rmmod nvme_fabrics 00:22:54.579 rmmod nvme_keyring 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 797705 ']' 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 797705 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 797705 ']' 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 797705 00:22:54.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (797705) - No such process 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 797705 is not found' 00:22:54.579 Process with pid 797705 is not found 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.579 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.116 07:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.116 00:22:57.116 real 0m9.768s 00:22:57.116 user 0m24.916s 00:22:57.116 sys 0m5.188s 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.116 ************************************ 00:22:57.116 END TEST nvmf_shutdown_tc4 00:22:57.116 ************************************ 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:57.116 00:22:57.116 real 0m39.070s 00:22:57.116 user 1m35.928s 00:22:57.116 sys 0m13.550s 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:57.116 ************************************ 00:22:57.116 END TEST nvmf_shutdown 00:22:57.116 ************************************ 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:57.116 ************************************ 00:22:57.116 START TEST nvmf_nsid 00:22:57.116 ************************************ 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:57.116 * Looking for test storage... 
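The teardown traced above runs `NOT wait 797923`: the harness expects waiting on the already-finished perf process to fail (es=1 in the trace) and converts that non-zero status back into success. A minimal sketch of that negative-assertion pattern, under the assumption that a simplified helper is enough to show the idea; this is not the actual NOT implementation from autotest_common.sh:

    # Hedged sketch of the NOT/es pattern traced above: run a command that is
    # expected to fail, and succeed only if it really returned non-zero.
    expect_failure() {
        local es=0
        "$@" || es=$?        # capture the exit status instead of aborting
        (( es != 0 ))        # succeed only when the wrapped command failed
    }

    expect_failure wait 797923   # e.g. waiting on a PID that is no longer a child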
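Two other cleanup idioms appear in the shutdown teardown above: killprocess probes PID 797705 with `kill -0` and finds it already gone, and the iptr step reloads the firewall rules minus anything tagged SPDK_NVMF via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A small sketch of both, with hypothetical helper names (they are not the functions defined in nvmf/common.sh or autotest_common.sh):

    # Hedged sketch of the teardown idioms traced above.
    pid_is_alive() {
        # kill -0 sends no signal; it only checks that the PID exists and is signalable
        kill -0 "$1" 2>/dev/null
    }

    drop_spdk_nvmf_rules() {
        # Reload the current iptables rules minus any line tagged SPDK_NVMF,
        # mirroring the iptables-save | grep -v SPDK_NVMF | iptables-restore step above
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    if ! pid_is_alive 797705; then
        echo 'Process with pid 797705 is not found'
    fi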
00:22:57.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:57.116 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.116 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:57.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.117 --rc genhtml_branch_coverage=1 00:22:57.117 --rc genhtml_function_coverage=1 00:22:57.117 --rc genhtml_legend=1 00:22:57.117 --rc geninfo_all_blocks=1 00:22:57.117 --rc geninfo_unexecuted_blocks=1 00:22:57.117 00:22:57.117 ' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:57.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.117 --rc genhtml_branch_coverage=1 00:22:57.117 --rc genhtml_function_coverage=1 00:22:57.117 --rc genhtml_legend=1 00:22:57.117 --rc geninfo_all_blocks=1 00:22:57.117 --rc geninfo_unexecuted_blocks=1 00:22:57.117 00:22:57.117 ' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:57.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.117 --rc genhtml_branch_coverage=1 00:22:57.117 --rc genhtml_function_coverage=1 00:22:57.117 --rc genhtml_legend=1 00:22:57.117 --rc geninfo_all_blocks=1 00:22:57.117 --rc geninfo_unexecuted_blocks=1 00:22:57.117 00:22:57.117 ' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:57.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.117 --rc genhtml_branch_coverage=1 00:22:57.117 --rc genhtml_function_coverage=1 00:22:57.117 --rc genhtml_legend=1 00:22:57.117 --rc geninfo_all_blocks=1 00:22:57.117 --rc geninfo_unexecuted_blocks=1 00:22:57.117 00:22:57.117 ' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:57.117 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.118 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:02.385 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:02.386 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:02.386 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
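The trace above is gather_supported_nvmf_pci_devs resolving the whitelisted Intel E810 / X722 and Mellanox device IDs to PCI functions and then, further down, to their kernel net interfaces through sysfs. As a rough stand-alone sketch of that sysfs lookup (the two PCI addresses are the ones the trace reports as found; the guard and loop shape here are simplified, not the exact common.sh code path):

    #!/usr/bin/env bash
    # PCI functions the trace identified as E810 NICs (0x8086:0x159b).
    pci_devs=(0000:86:00.0 0000:86:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # Each PCI function exposes its netdev name(s) under /sys/bus/pci/devices/<addr>/net/
        [[ -d /sys/bus/pci/devices/$pci/net ]] || continue
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done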
00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:02.386 Found net devices under 0000:86:00.0: cvl_0_0 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:02.386 Found net devices under 0000:86:00.1: cvl_0_1 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.386 07:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.386 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:23:02.646 00:23:02.646 --- 10.0.0.2 ping statistics --- 00:23:02.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.646 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:02.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:23:02.646 00:23:02.646 --- 10.0.0.1 ping statistics --- 00:23:02.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.646 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=802376 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 802376 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 802376 ']' 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.646 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:02.646 [2024-11-26 07:32:30.628486] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:23:02.646 [2024-11-26 07:32:30.628537] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.646 [2024-11-26 07:32:30.692436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.646 [2024-11-26 07:32:30.733990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.646 [2024-11-26 07:32:30.734027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.646 [2024-11-26 07:32:30.734034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.646 [2024-11-26 07:32:30.734040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.646 [2024-11-26 07:32:30.734045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.647 [2024-11-26 07:32:30.734605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=802404 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
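The nvmf_tcp_init steps traced earlier split the two E810 ports into a point-to-point topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP/4420, and both directions are verified with ping before nvmf_tgt is started inside the namespace. A condensed sketch of those steps, assuming the same interface and namespace names as the trace (requires root; the SPDK_NVMF comment on the iptables rule is omitted):

    # Target interface lives in its own namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic to the default port used by the test.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions before launching the target in the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1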
00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c4553a9a-932c-40a1-b70c-667c949e4d15 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=f8e224e3-ab78-454c-a421-8878785b0f2b 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a4728250-5335-43a4-92fa-b4a2c9c84883 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:02.906 null0 00:23:02.906 null1 00:23:02.906 [2024-11-26 07:32:30.918461] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:23:02.906 [2024-11-26 07:32:30.918505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802404 ] 00:23:02.906 null2 00:23:02.906 [2024-11-26 07:32:30.923464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.906 [2024-11-26 07:32:30.947661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.906 [2024-11-26 07:32:30.982458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 802404 /var/tmp/tgt2.sock 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 802404 ']' 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:02.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.906 07:32:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:03.165 [2024-11-26 07:32:31.026790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.165 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.165 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:03.165 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:03.734 [2024-11-26 07:32:31.550430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.734 [2024-11-26 07:32:31.566539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:03.734 nvme0n1 nvme0n2 00:23:03.734 nvme1n1 00:23:03.734 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:03.734 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:03.734 07:32:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:04.670 07:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:05.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:05.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:05.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:05.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:05.607 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:05.607 07:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c4553a9a-932c-40a1-b70c-667c949e4d15 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c4553a9a932c40a1b70c667c949e4d15 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C4553A9A932C40A1B70C667C949E4D15 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C4553A9A932C40A1B70C667C949E4D15 == \C\4\5\5\3\A\9\A\9\3\2\C\4\0\A\1\B\7\0\C\6\6\7\C\9\4\9\E\4\D\1\5 ]] 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid f8e224e3-ab78-454c-a421-8878785b0f2b 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f8e224e3ab78454ca4218878785b0f2b 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F8E224E3AB78454CA4218878785B0F2B 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ F8E224E3AB78454CA4218878785B0F2B == \F\8\E\2\2\4\E\3\A\B\7\8\4\5\4\C\A\4\2\1\8\8\7\8\7\8\5\B\0\F\2\B ]] 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:05.866 07:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a4728250-5335-43a4-92fa-b4a2c9c84883 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a4728250533543a492fab4a2c9c84883 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A4728250533543A492FAB4A2C9C84883 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A4728250533543A492FAB4A2C9C84883 == \A\4\7\2\8\2\5\0\5\3\3\5\4\3\A\4\9\2\F\A\B\4\A\2\C\9\C\8\4\8\8\3 ]] 00:23:05.866 07:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 802404 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 802404 ']' 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 802404 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 802404 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 802404' 00:23:06.184 killing process with pid 802404 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 802404 00:23:06.184 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 802404 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 
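The three nsid checks traced above (target/nsid.sh@96-100) all follow the same pattern: the UUID passed when the namespace was created is converted to the expected NGUID by stripping the dashes, then compared, after uppercasing, against the nguid field that nvme id-ns reports for the connected block device. A minimal reproduction of one such check, assuming nvme-cli and jq are available and /dev/nvme0n1 is the namespace connected by the test (the UUID is the ns1uuid from the trace):

    ns_uuid=c4553a9a-932c-40a1-b70c-667c949e4d15        # UUID given to the target at namespace creation
    expected_nguid=$(echo "$ns_uuid" | tr -d -)         # uuid2nguid: drop the dashes
    reported_nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    # The test compares both values uppercased; mirror that here.
    if [[ ${reported_nguid^^} == "${expected_nguid^^}" ]]; then
        echo "NGUID matches namespace UUID"
    else
        echo "NGUID mismatch: got $reported_nguid, expected $expected_nguid"
    fi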
00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.480 rmmod nvme_tcp 00:23:06.480 rmmod nvme_fabrics 00:23:06.480 rmmod nvme_keyring 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 802376 ']' 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 802376 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 802376 ']' 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 802376 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.480 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 802376 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 802376' 00:23:06.757 killing process with pid 802376 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 802376 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 802376 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.757 07:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.297 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.297 00:23:09.297 real 0m11.950s 00:23:09.297 user 0m9.432s 00:23:09.297 sys 0m5.247s 00:23:09.297 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:23:09.297 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:09.297 ************************************ 00:23:09.297 END TEST nvmf_nsid 00:23:09.297 ************************************ 00:23:09.297 07:32:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:09.297 00:23:09.297 real 11m40.807s 00:23:09.297 user 25m17.212s 00:23:09.297 sys 3m35.061s 00:23:09.297 07:32:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.297 07:32:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:09.297 ************************************ 00:23:09.297 END TEST nvmf_target_extra 00:23:09.297 ************************************ 00:23:09.297 07:32:36 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:09.297 07:32:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.297 07:32:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.297 07:32:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.297 ************************************ 00:23:09.297 START TEST nvmf_host 00:23:09.297 ************************************ 00:23:09.297 07:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:09.297 * Looking for test storage... 00:23:09.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:09.297 07:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:09.297 07:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:09.297 07:32:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:09.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.297 --rc genhtml_branch_coverage=1 00:23:09.297 --rc genhtml_function_coverage=1 00:23:09.297 --rc genhtml_legend=1 00:23:09.297 --rc geninfo_all_blocks=1 00:23:09.297 --rc geninfo_unexecuted_blocks=1 00:23:09.297 00:23:09.297 ' 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:09.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.297 --rc genhtml_branch_coverage=1 00:23:09.297 --rc genhtml_function_coverage=1 00:23:09.297 --rc genhtml_legend=1 00:23:09.297 --rc geninfo_all_blocks=1 00:23:09.297 --rc geninfo_unexecuted_blocks=1 00:23:09.297 00:23:09.297 ' 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:09.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.297 --rc genhtml_branch_coverage=1 00:23:09.297 --rc genhtml_function_coverage=1 00:23:09.297 --rc genhtml_legend=1 00:23:09.297 --rc geninfo_all_blocks=1 00:23:09.297 --rc geninfo_unexecuted_blocks=1 00:23:09.297 00:23:09.297 ' 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:09.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.297 --rc genhtml_branch_coverage=1 00:23:09.297 --rc genhtml_function_coverage=1 00:23:09.297 --rc genhtml_legend=1 00:23:09.297 --rc geninfo_all_blocks=1 00:23:09.297 --rc geninfo_unexecuted_blocks=1 00:23:09.297 00:23:09.297 ' 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.297 07:32:37 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.298 ************************************ 00:23:09.298 START TEST nvmf_multicontroller 00:23:09.298 ************************************ 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:09.298 * Looking for test storage... 
00:23:09.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.298 --rc genhtml_branch_coverage=1 00:23:09.298 --rc genhtml_function_coverage=1 00:23:09.298 --rc genhtml_legend=1 00:23:09.298 --rc geninfo_all_blocks=1 00:23:09.298 --rc geninfo_unexecuted_blocks=1 00:23:09.298 00:23:09.298 ' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.298 --rc genhtml_branch_coverage=1 00:23:09.298 --rc genhtml_function_coverage=1 00:23:09.298 --rc genhtml_legend=1 00:23:09.298 --rc geninfo_all_blocks=1 00:23:09.298 --rc geninfo_unexecuted_blocks=1 00:23:09.298 00:23:09.298 ' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.298 --rc genhtml_branch_coverage=1 00:23:09.298 --rc genhtml_function_coverage=1 00:23:09.298 --rc genhtml_legend=1 00:23:09.298 --rc geninfo_all_blocks=1 00:23:09.298 --rc geninfo_unexecuted_blocks=1 00:23:09.298 00:23:09.298 ' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.298 --rc genhtml_branch_coverage=1 00:23:09.298 --rc genhtml_function_coverage=1 00:23:09.298 --rc genhtml_legend=1 00:23:09.298 --rc geninfo_all_blocks=1 00:23:09.298 --rc geninfo_unexecuted_blocks=1 00:23:09.298 00:23:09.298 ' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:09.298 07:32:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.298 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:09.299 07:32:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.299 07:32:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.872 
07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:15.872 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:15.872 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.872 07:32:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.872 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:15.873 Found net devices under 0000:86:00.0: cvl_0_0 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:15.873 Found net devices under 0000:86:00.1: cvl_0_1 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
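The nvmf_tcp_init call above is what turns the two ice ports found during PCI discovery into a self-contained TCP test link: the target-side port is moved into a private network namespace, the initiator-side port stays in the root namespace, and a firewall rule opens the NVMe-oF port before reachability is checked both ways. A condensed sketch of the steps traced below (interface names, addresses and port are taken from the trace; the grouping into target/initiator blocks is a simplification):

    # Target side: isolate cvl_0_0 in its own namespace with the target address
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator side: keep cvl_0_1 in the root namespace with the initiator address
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # Allow NVMe/TCP traffic on 4420 and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1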
00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.873 07:32:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:23:15.873 00:23:15.873 --- 10.0.0.2 ping statistics --- 00:23:15.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.873 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:23:15.873 00:23:15.873 --- 10.0.0.1 ping statistics --- 00:23:15.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.873 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=806715 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 806715 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 806715 ']' 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 [2024-11-26 07:32:43.118998] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:23:15.873 [2024-11-26 07:32:43.119041] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.873 [2024-11-26 07:32:43.185673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:15.873 [2024-11-26 07:32:43.229152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.873 [2024-11-26 07:32:43.229193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.873 [2024-11-26 07:32:43.229200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.873 [2024-11-26 07:32:43.229207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.873 [2024-11-26 07:32:43.229212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.873 [2024-11-26 07:32:43.230655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.873 [2024-11-26 07:32:43.230720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.873 [2024-11-26 07:32:43.230721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 [2024-11-26 07:32:43.378931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 Malloc0 00:23:15.873 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 [2024-11-26 07:32:43.440279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 [2024-11-26 07:32:43.448204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 Malloc1 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=806736 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 806736 /var/tmp/bdevperf.sock 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 806736 ']' 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
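bdevperf was launched just above with -z (do not start I/O until told to over RPC) and -r /var/tmp/bdevperf.sock, so every rpc_cmd -s /var/tmp/bdevperf.sock in the rest of the test amounts to running SPDK's scripts/rpc.py against that socket instead of the nvmf_tgt one. Replayed by hand, the first attach in the trace would look roughly like this (all arguments copied from the trace; running from the spdk checkout is assumed):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Create bdev NVMe0n1 inside bdevperf by attaching to cnode1 on the first listener
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

The NOT-wrapped variants that follow deliberately reuse the same -b NVMe0 name and are expected to come back with JSON-RPC error -114.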
00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.874 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.133 NVMe0n1 00:23:16.133 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.133 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:16.133 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.133 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:16.133 07:32:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.133 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.133 1 00:23:16.133 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:16.133 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:16.133 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.134 request: 00:23:16.134 { 00:23:16.134 "name": "NVMe0", 00:23:16.134 "trtype": "tcp", 00:23:16.134 "traddr": "10.0.0.2", 00:23:16.134 "adrfam": "ipv4", 00:23:16.134 "trsvcid": "4420", 00:23:16.134 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:16.134 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:16.134 "hostaddr": "10.0.0.1", 00:23:16.134 "prchk_reftag": false, 00:23:16.134 "prchk_guard": false, 00:23:16.134 "hdgst": false, 00:23:16.134 "ddgst": false, 00:23:16.134 "allow_unrecognized_csi": false, 00:23:16.134 "method": "bdev_nvme_attach_controller", 00:23:16.134 "req_id": 1 00:23:16.134 } 00:23:16.134 Got JSON-RPC error response 00:23:16.134 response: 00:23:16.134 { 00:23:16.134 "code": -114, 00:23:16.134 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:16.134 } 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.134 request: 00:23:16.134 { 00:23:16.134 "name": "NVMe0", 00:23:16.134 "trtype": "tcp", 00:23:16.134 "traddr": "10.0.0.2", 00:23:16.134 "adrfam": "ipv4", 00:23:16.134 "trsvcid": "4420", 00:23:16.134 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.134 "hostaddr": "10.0.0.1", 00:23:16.134 "prchk_reftag": false, 00:23:16.134 "prchk_guard": false, 00:23:16.134 "hdgst": false, 00:23:16.134 "ddgst": false, 00:23:16.134 "allow_unrecognized_csi": false, 00:23:16.134 "method": "bdev_nvme_attach_controller", 00:23:16.134 "req_id": 1 00:23:16.134 } 00:23:16.134 Got JSON-RPC error response 00:23:16.134 response: 00:23:16.134 { 00:23:16.134 "code": -114, 00:23:16.134 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:16.134 } 00:23:16.134 07:32:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.134 request: 00:23:16.134 { 00:23:16.134 "name": "NVMe0", 00:23:16.134 "trtype": "tcp", 00:23:16.134 "traddr": "10.0.0.2", 00:23:16.134 "adrfam": "ipv4", 00:23:16.134 "trsvcid": "4420", 00:23:16.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.134 "hostaddr": "10.0.0.1", 00:23:16.134 "prchk_reftag": false, 00:23:16.134 "prchk_guard": false, 00:23:16.134 "hdgst": false, 00:23:16.134 "ddgst": false, 00:23:16.134 "multipath": "disable", 00:23:16.134 "allow_unrecognized_csi": false, 00:23:16.134 "method": "bdev_nvme_attach_controller", 00:23:16.134 "req_id": 1 00:23:16.134 } 00:23:16.134 Got JSON-RPC error response 00:23:16.134 response: 00:23:16.134 { 00:23:16.134 "code": -114, 00:23:16.134 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:16.134 } 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:16.134 07:32:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.134 request: 00:23:16.134 { 00:23:16.134 "name": "NVMe0", 00:23:16.134 "trtype": "tcp", 00:23:16.134 "traddr": "10.0.0.2", 00:23:16.134 "adrfam": "ipv4", 00:23:16.134 "trsvcid": "4420", 00:23:16.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.134 "hostaddr": "10.0.0.1", 00:23:16.134 "prchk_reftag": false, 00:23:16.134 "prchk_guard": false, 00:23:16.134 "hdgst": false, 00:23:16.134 "ddgst": false, 00:23:16.134 "multipath": "failover", 00:23:16.134 "allow_unrecognized_csi": false, 00:23:16.134 "method": "bdev_nvme_attach_controller", 00:23:16.134 "req_id": 1 00:23:16.134 } 00:23:16.134 Got JSON-RPC error response 00:23:16.134 response: 00:23:16.134 { 00:23:16.134 "code": -114, 00:23:16.134 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:16.134 } 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:16.134 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:16.135 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:16.135 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.135 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.393 NVMe0n1 00:23:16.393 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
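The -114 responses above are the expected outcome: bdevperf already holds a controller named NVMe0 on 10.0.0.2:4420, and reusing that name with a different hostnqn, a different subsystem NQN, -x disable, or -x failover on the same address and port is refused. Only the attach to the second listener on port 4421 (the call right above) goes through. The count check at multicontroller.sh@90 below reduces to something like the following (socket path and expected count taken from the trace):

    # Two controllers, NVMe0 and NVMe1, should be visible on the bdevperf socket
    count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)
    [ "$count" -eq 2 ]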
00:23:16.393 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:16.393 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.393 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.393 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.393 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:16.393 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.393 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.652 00:23:16.652 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.652 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:16.652 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:16.652 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.652 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.652 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.652 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:16.652 07:32:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:17.591 { 00:23:17.591 "results": [ 00:23:17.591 { 00:23:17.591 "job": "NVMe0n1", 00:23:17.591 "core_mask": "0x1", 00:23:17.591 "workload": "write", 00:23:17.591 "status": "finished", 00:23:17.591 "queue_depth": 128, 00:23:17.591 "io_size": 4096, 00:23:17.591 "runtime": 1.008012, 00:23:17.591 "iops": 24444.153442617746, 00:23:17.591 "mibps": 95.48497438522557, 00:23:17.591 "io_failed": 0, 00:23:17.591 "io_timeout": 0, 00:23:17.591 "avg_latency_us": 5229.853632975719, 00:23:17.591 "min_latency_us": 4986.434782608696, 00:23:17.591 "max_latency_us": 12993.224347826086 00:23:17.591 } 00:23:17.591 ], 00:23:17.591 "core_count": 1 00:23:17.591 } 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 806736 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 806736 ']' 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 806736 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.591 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 806736 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 806736' 00:23:17.849 killing process with pid 806736 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 806736 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 806736 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:17.849 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:17.849 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:17.849 [2024-11-26 07:32:43.554516] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:23:17.849 [2024-11-26 07:32:43.554561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806736 ] 00:23:17.849 [2024-11-26 07:32:43.617930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.849 [2024-11-26 07:32:43.659850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.849 [2024-11-26 07:32:44.487661] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 2408b390-81a3-48ef-be86-67ef0b1cf19e already exists 00:23:17.850 [2024-11-26 07:32:44.487690] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:2408b390-81a3-48ef-be86-67ef0b1cf19e alias for bdev NVMe1n1 00:23:17.850 [2024-11-26 07:32:44.487698] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:17.850 Running I/O for 1 seconds... 00:23:17.850 24385.00 IOPS, 95.25 MiB/s 00:23:17.850 Latency(us) 00:23:17.850 [2024-11-26T06:32:45.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.850 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:17.850 NVMe0n1 : 1.01 24444.15 95.48 0.00 0.00 5229.85 4986.43 12993.22 00:23:17.850 [2024-11-26T06:32:45.950Z] =================================================================================================================== 00:23:17.850 [2024-11-26T06:32:45.950Z] Total : 24444.15 95.48 0.00 0.00 5229.85 4986.43 12993.22 00:23:17.850 Received shutdown signal, test time was about 1.000000 seconds 00:23:17.850 00:23:17.850 Latency(us) 00:23:17.850 [2024-11-26T06:32:45.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.850 [2024-11-26T06:32:45.950Z] =================================================================================================================== 00:23:17.850 [2024-11-26T06:32:45.950Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.850 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.850 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.850 rmmod nvme_tcp 00:23:17.850 rmmod nvme_fabrics 00:23:17.850 rmmod nvme_keyring 00:23:18.108 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.109 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:18.109 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:18.109 
07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 806715 ']' 00:23:18.109 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 806715 00:23:18.109 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 806715 ']' 00:23:18.109 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 806715 00:23:18.109 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:18.109 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.109 07:32:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 806715 00:23:18.109 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:18.109 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.109 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 806715' 00:23:18.109 killing process with pid 806715 00:23:18.109 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 806715 00:23:18.109 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 806715 00:23:18.367 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.368 07:32:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.271 07:32:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:20.271 00:23:20.271 real 0m11.154s 00:23:20.271 user 0m13.040s 00:23:20.271 sys 0m5.038s 00:23:20.271 07:32:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.271 07:32:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.271 ************************************ 00:23:20.271 END TEST nvmf_multicontroller 00:23:20.271 ************************************ 00:23:20.271 07:32:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:20.271 07:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:20.271 07:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.271 07:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.271 ************************************ 00:23:20.271 START TEST nvmf_aer 00:23:20.271 ************************************ 00:23:20.271 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:20.530 * Looking for test storage... 00:23:20.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:20.530 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:20.530 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:20.530 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:20.530 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:20.530 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.530 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:20.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.531 --rc genhtml_branch_coverage=1 00:23:20.531 --rc genhtml_function_coverage=1 00:23:20.531 --rc genhtml_legend=1 00:23:20.531 --rc geninfo_all_blocks=1 00:23:20.531 --rc geninfo_unexecuted_blocks=1 00:23:20.531 00:23:20.531 ' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:20.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.531 --rc genhtml_branch_coverage=1 00:23:20.531 --rc genhtml_function_coverage=1 00:23:20.531 --rc genhtml_legend=1 00:23:20.531 --rc geninfo_all_blocks=1 00:23:20.531 --rc geninfo_unexecuted_blocks=1 00:23:20.531 00:23:20.531 ' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:20.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.531 --rc genhtml_branch_coverage=1 00:23:20.531 --rc genhtml_function_coverage=1 00:23:20.531 --rc genhtml_legend=1 00:23:20.531 --rc geninfo_all_blocks=1 00:23:20.531 --rc geninfo_unexecuted_blocks=1 00:23:20.531 00:23:20.531 ' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:20.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.531 --rc genhtml_branch_coverage=1 00:23:20.531 --rc genhtml_function_coverage=1 00:23:20.531 --rc genhtml_legend=1 00:23:20.531 --rc geninfo_all_blocks=1 00:23:20.531 --rc geninfo_unexecuted_blocks=1 00:23:20.531 00:23:20.531 ' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:20.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.531 07:32:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.103 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.103 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.103 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:27.104 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:27.104 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:27.104 Found net devices under 0000:86:00.0: cvl_0_0 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.104 07:32:53 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:27.104 Found net devices under 0000:86:00.1: cvl_0_1 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.104 07:32:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.104 
07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:23:27.104 00:23:27.104 --- 10.0.0.2 ping statistics --- 00:23:27.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.104 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:27.104 00:23:27.104 --- 10.0.0.1 ping statistics --- 00:23:27.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.104 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=810725 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 810725 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 810725 ']' 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 [2024-11-26 07:32:54.270293] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:23:27.105 [2024-11-26 07:32:54.270335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.105 [2024-11-26 07:32:54.335230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.105 [2024-11-26 07:32:54.379127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.105 [2024-11-26 07:32:54.379165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.105 [2024-11-26 07:32:54.379173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.105 [2024-11-26 07:32:54.379179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.105 [2024-11-26 07:32:54.379184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.105 [2024-11-26 07:32:54.380598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.105 [2024-11-26 07:32:54.380701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.105 [2024-11-26 07:32:54.380817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.105 [2024-11-26 07:32:54.380818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 [2024-11-26 07:32:54.518245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 Malloc0 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 [2024-11-26 07:32:54.582444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 [ 00:23:27.105 { 00:23:27.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:27.105 "subtype": "Discovery", 00:23:27.105 "listen_addresses": [], 00:23:27.105 "allow_any_host": true, 00:23:27.105 "hosts": [] 00:23:27.105 }, 00:23:27.105 { 00:23:27.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.105 "subtype": "NVMe", 00:23:27.105 "listen_addresses": [ 00:23:27.105 { 00:23:27.105 "trtype": "TCP", 00:23:27.105 "adrfam": "IPv4", 00:23:27.105 "traddr": "10.0.0.2", 00:23:27.105 "trsvcid": "4420" 00:23:27.105 } 00:23:27.105 ], 00:23:27.105 "allow_any_host": true, 00:23:27.105 "hosts": [], 00:23:27.105 "serial_number": "SPDK00000000000001", 00:23:27.105 "model_number": "SPDK bdev Controller", 00:23:27.105 "max_namespaces": 2, 00:23:27.105 "min_cntlid": 1, 00:23:27.105 "max_cntlid": 65519, 00:23:27.105 "namespaces": [ 00:23:27.105 { 00:23:27.105 "nsid": 1, 00:23:27.105 "bdev_name": "Malloc0", 00:23:27.105 "name": "Malloc0", 00:23:27.105 "nguid": "117F8EEF8B2F449893ECB1F66440FDC9", 00:23:27.105 "uuid": "117f8eef-8b2f-4498-93ec-b1f66440fdc9" 00:23:27.105 } 00:23:27.105 ] 00:23:27.105 } 00:23:27.105 ] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=810753 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 Malloc1 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.105 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.105 Asynchronous Event Request test 00:23:27.105 Attaching to 10.0.0.2 00:23:27.105 Attached to 10.0.0.2 00:23:27.105 Registering asynchronous event callbacks... 00:23:27.105 Starting namespace attribute notice tests for all controllers... 00:23:27.105 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:27.105 aer_cb - Changed Namespace 00:23:27.105 Cleaning up... 
00:23:27.105 [ 00:23:27.105 { 00:23:27.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:27.105 "subtype": "Discovery", 00:23:27.105 "listen_addresses": [], 00:23:27.105 "allow_any_host": true, 00:23:27.105 "hosts": [] 00:23:27.105 }, 00:23:27.105 { 00:23:27.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.105 "subtype": "NVMe", 00:23:27.105 "listen_addresses": [ 00:23:27.105 { 00:23:27.105 "trtype": "TCP", 00:23:27.105 "adrfam": "IPv4", 00:23:27.105 "traddr": "10.0.0.2", 00:23:27.105 "trsvcid": "4420" 00:23:27.105 } 00:23:27.105 ], 00:23:27.105 "allow_any_host": true, 00:23:27.105 "hosts": [], 00:23:27.105 "serial_number": "SPDK00000000000001", 00:23:27.105 "model_number": "SPDK bdev Controller", 00:23:27.105 "max_namespaces": 2, 00:23:27.105 "min_cntlid": 1, 00:23:27.105 "max_cntlid": 65519, 00:23:27.105 "namespaces": [ 00:23:27.106 { 00:23:27.106 "nsid": 1, 00:23:27.106 "bdev_name": "Malloc0", 00:23:27.106 "name": "Malloc0", 00:23:27.106 "nguid": "117F8EEF8B2F449893ECB1F66440FDC9", 00:23:27.106 "uuid": "117f8eef-8b2f-4498-93ec-b1f66440fdc9" 00:23:27.106 }, 00:23:27.106 { 00:23:27.106 "nsid": 2, 00:23:27.106 "bdev_name": "Malloc1", 00:23:27.106 "name": "Malloc1", 00:23:27.106 "nguid": "AB4C44F8D9DA4B83BFB3F1678969E262", 00:23:27.106 "uuid": "ab4c44f8-d9da-4b83-bfb3-f1678969e262" 00:23:27.106 } 00:23:27.106 ] 00:23:27.106 } 00:23:27.106 ] 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 810753 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.106 07:32:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.106 rmmod 
nvme_tcp 00:23:27.106 rmmod nvme_fabrics 00:23:27.106 rmmod nvme_keyring 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 810725 ']' 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 810725 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 810725 ']' 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 810725 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 810725 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 810725' 00:23:27.106 killing process with pid 810725 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 810725 00:23:27.106 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 810725 00:23:27.365 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.366 07:32:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.270 07:32:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.270 00:23:29.270 real 0m8.942s 00:23:29.270 user 0m5.087s 00:23:29.270 sys 0m4.606s 00:23:29.270 07:32:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.270 07:32:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.270 ************************************ 00:23:29.270 END TEST nvmf_aer 00:23:29.270 ************************************ 00:23:29.270 07:32:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:29.270 07:32:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.270 07:32:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.270 07:32:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.530 ************************************ 00:23:29.530 START TEST nvmf_async_init 00:23:29.530 ************************************ 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:29.530 * Looking for test storage... 00:23:29.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:29.530 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.531 --rc genhtml_branch_coverage=1 00:23:29.531 --rc genhtml_function_coverage=1 00:23:29.531 --rc genhtml_legend=1 00:23:29.531 --rc geninfo_all_blocks=1 00:23:29.531 --rc geninfo_unexecuted_blocks=1 00:23:29.531 00:23:29.531 ' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.531 --rc genhtml_branch_coverage=1 00:23:29.531 --rc genhtml_function_coverage=1 00:23:29.531 --rc genhtml_legend=1 00:23:29.531 --rc geninfo_all_blocks=1 00:23:29.531 --rc geninfo_unexecuted_blocks=1 00:23:29.531 00:23:29.531 ' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.531 --rc genhtml_branch_coverage=1 00:23:29.531 --rc genhtml_function_coverage=1 00:23:29.531 --rc genhtml_legend=1 00:23:29.531 --rc geninfo_all_blocks=1 00:23:29.531 --rc geninfo_unexecuted_blocks=1 00:23:29.531 00:23:29.531 ' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.531 --rc genhtml_branch_coverage=1 00:23:29.531 --rc genhtml_function_coverage=1 00:23:29.531 --rc genhtml_legend=1 00:23:29.531 --rc geninfo_all_blocks=1 00:23:29.531 --rc geninfo_unexecuted_blocks=1 00:23:29.531 00:23:29.531 ' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.531 07:32:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:29.531 07:32:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=010f10627f1c4a359dda97b2fc1d0a1c 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.531 07:32:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.804 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:34.805 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:34.805 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:34.805 Found net devices under 0000:86:00.0: cvl_0_0 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:34.805 Found net devices under 0000:86:00.1: cvl_0_1 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.805 07:33:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.805 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:23:35.064 00:23:35.064 --- 10.0.0.2 ping statistics --- 00:23:35.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.064 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:23:35.064 00:23:35.064 --- 10.0.0.1 ping statistics --- 00:23:35.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.064 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.064 07:33:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=814399 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 814399 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 814399 ']' 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.064 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:35.064 [2024-11-26 07:33:03.053499] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
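The nvmftestinit → nvmf_tcp_init portion of the trace above builds the test network by moving one port of the dual-port E810 NIC into a private network namespace, addressing both ends, opening port 4420 in iptables, and verifying reachability with ping in both directions before launching the target inside that namespace. A condensed sketch of those steps, using the interface names and addresses from this particular run (cvl_0_0/cvl_0_1 and 10.0.0.0/24 are conventions of this harness and will differ on other hosts):

# Target side (cvl_0_0) is placed inside the cvl_0_0_ns_spdk namespace;
# the initiator side (cvl_0_1) stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to the default port; the harness also tags the rule
# with an SPDK_NVMF comment so teardown can find and remove it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0x1), which is why every target-side command in the rest of the trace carries the same ip netns exec prefix.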
00:23:35.064 [2024-11-26 07:33:03.053544] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.064 [2024-11-26 07:33:03.119344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.323 [2024-11-26 07:33:03.161283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.323 [2024-11-26 07:33:03.161318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.323 [2024-11-26 07:33:03.161325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.323 [2024-11-26 07:33:03.161336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.323 [2024-11-26 07:33:03.161341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.323 [2024-11-26 07:33:03.161909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 [2024-11-26 07:33:03.297218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 null0 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 010f10627f1c4a359dda97b2fc1d0a1c 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.323 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:35.324 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.324 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.324 [2024-11-26 07:33:03.337461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.324 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.324 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:35.324 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.324 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 nvme0n1 00:23:35.583 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.583 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:35.583 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.583 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 [ 00:23:35.583 { 00:23:35.583 "name": "nvme0n1", 00:23:35.583 "aliases": [ 00:23:35.583 "010f1062-7f1c-4a35-9dda-97b2fc1d0a1c" 00:23:35.583 ], 00:23:35.583 "product_name": "NVMe disk", 00:23:35.583 "block_size": 512, 00:23:35.583 "num_blocks": 2097152, 00:23:35.583 "uuid": "010f1062-7f1c-4a35-9dda-97b2fc1d0a1c", 00:23:35.583 "numa_id": 1, 00:23:35.583 "assigned_rate_limits": { 00:23:35.583 "rw_ios_per_sec": 0, 00:23:35.583 "rw_mbytes_per_sec": 0, 00:23:35.583 "r_mbytes_per_sec": 0, 00:23:35.583 "w_mbytes_per_sec": 0 00:23:35.583 }, 00:23:35.583 "claimed": false, 00:23:35.583 "zoned": false, 00:23:35.583 "supported_io_types": { 00:23:35.583 "read": true, 00:23:35.583 "write": true, 00:23:35.583 "unmap": false, 00:23:35.583 "flush": true, 00:23:35.583 "reset": true, 00:23:35.583 "nvme_admin": true, 00:23:35.583 "nvme_io": true, 00:23:35.583 "nvme_io_md": false, 00:23:35.583 "write_zeroes": true, 00:23:35.583 "zcopy": false, 00:23:35.583 "get_zone_info": false, 00:23:35.583 "zone_management": false, 00:23:35.583 "zone_append": false, 00:23:35.583 "compare": true, 00:23:35.583 "compare_and_write": true, 00:23:35.583 "abort": true, 00:23:35.583 "seek_hole": false, 00:23:35.583 "seek_data": false, 00:23:35.583 "copy": true, 00:23:35.583 "nvme_iov_md": false 00:23:35.583 }, 00:23:35.583 
"memory_domains": [ 00:23:35.583 { 00:23:35.583 "dma_device_id": "system", 00:23:35.583 "dma_device_type": 1 00:23:35.583 } 00:23:35.583 ], 00:23:35.583 "driver_specific": { 00:23:35.583 "nvme": [ 00:23:35.583 { 00:23:35.583 "trid": { 00:23:35.583 "trtype": "TCP", 00:23:35.583 "adrfam": "IPv4", 00:23:35.583 "traddr": "10.0.0.2", 00:23:35.583 "trsvcid": "4420", 00:23:35.583 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:35.583 }, 00:23:35.583 "ctrlr_data": { 00:23:35.583 "cntlid": 1, 00:23:35.583 "vendor_id": "0x8086", 00:23:35.583 "model_number": "SPDK bdev Controller", 00:23:35.583 "serial_number": "00000000000000000000", 00:23:35.583 "firmware_revision": "25.01", 00:23:35.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.583 "oacs": { 00:23:35.583 "security": 0, 00:23:35.583 "format": 0, 00:23:35.583 "firmware": 0, 00:23:35.583 "ns_manage": 0 00:23:35.583 }, 00:23:35.583 "multi_ctrlr": true, 00:23:35.583 "ana_reporting": false 00:23:35.583 }, 00:23:35.583 "vs": { 00:23:35.583 "nvme_version": "1.3" 00:23:35.583 }, 00:23:35.583 "ns_data": { 00:23:35.583 "id": 1, 00:23:35.583 "can_share": true 00:23:35.583 } 00:23:35.583 } 00:23:35.583 ], 00:23:35.583 "mp_policy": "active_passive" 00:23:35.583 } 00:23:35.583 } 00:23:35.583 ] 00:23:35.583 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.583 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:35.583 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.583 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.583 [2024-11-26 07:33:03.586148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.583 [2024-11-26 07:33:03.586207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1617220 (9): Bad file descriptor 00:23:35.842 [2024-11-26 07:33:03.718033] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:35.842 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.842 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:35.842 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.842 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.842 [ 00:23:35.842 { 00:23:35.842 "name": "nvme0n1", 00:23:35.842 "aliases": [ 00:23:35.842 "010f1062-7f1c-4a35-9dda-97b2fc1d0a1c" 00:23:35.842 ], 00:23:35.842 "product_name": "NVMe disk", 00:23:35.842 "block_size": 512, 00:23:35.842 "num_blocks": 2097152, 00:23:35.842 "uuid": "010f1062-7f1c-4a35-9dda-97b2fc1d0a1c", 00:23:35.842 "numa_id": 1, 00:23:35.842 "assigned_rate_limits": { 00:23:35.842 "rw_ios_per_sec": 0, 00:23:35.842 "rw_mbytes_per_sec": 0, 00:23:35.842 "r_mbytes_per_sec": 0, 00:23:35.842 "w_mbytes_per_sec": 0 00:23:35.842 }, 00:23:35.842 "claimed": false, 00:23:35.842 "zoned": false, 00:23:35.842 "supported_io_types": { 00:23:35.842 "read": true, 00:23:35.842 "write": true, 00:23:35.842 "unmap": false, 00:23:35.842 "flush": true, 00:23:35.842 "reset": true, 00:23:35.842 "nvme_admin": true, 00:23:35.843 "nvme_io": true, 00:23:35.843 "nvme_io_md": false, 00:23:35.843 "write_zeroes": true, 00:23:35.843 "zcopy": false, 00:23:35.843 "get_zone_info": false, 00:23:35.843 "zone_management": false, 00:23:35.843 "zone_append": false, 00:23:35.843 "compare": true, 00:23:35.843 "compare_and_write": true, 00:23:35.843 "abort": true, 00:23:35.843 "seek_hole": false, 00:23:35.843 "seek_data": false, 00:23:35.843 "copy": true, 00:23:35.843 "nvme_iov_md": false 00:23:35.843 }, 00:23:35.843 "memory_domains": [ 00:23:35.843 { 00:23:35.843 "dma_device_id": "system", 00:23:35.843 "dma_device_type": 1 00:23:35.843 } 00:23:35.843 ], 00:23:35.843 "driver_specific": { 00:23:35.843 "nvme": [ 00:23:35.843 { 00:23:35.843 "trid": { 00:23:35.843 "trtype": "TCP", 00:23:35.843 "adrfam": "IPv4", 00:23:35.843 "traddr": "10.0.0.2", 00:23:35.843 "trsvcid": "4420", 00:23:35.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:35.843 }, 00:23:35.843 "ctrlr_data": { 00:23:35.843 "cntlid": 2, 00:23:35.843 "vendor_id": "0x8086", 00:23:35.843 "model_number": "SPDK bdev Controller", 00:23:35.843 "serial_number": "00000000000000000000", 00:23:35.843 "firmware_revision": "25.01", 00:23:35.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.843 "oacs": { 00:23:35.843 "security": 0, 00:23:35.843 "format": 0, 00:23:35.843 "firmware": 0, 00:23:35.843 "ns_manage": 0 00:23:35.843 }, 00:23:35.843 "multi_ctrlr": true, 00:23:35.843 "ana_reporting": false 00:23:35.843 }, 00:23:35.843 "vs": { 00:23:35.843 "nvme_version": "1.3" 00:23:35.843 }, 00:23:35.843 "ns_data": { 00:23:35.843 "id": 1, 00:23:35.843 "can_share": true 00:23:35.843 } 00:23:35.843 } 00:23:35.843 ], 00:23:35.843 "mp_policy": "active_passive" 00:23:35.843 } 00:23:35.843 } 00:23:35.843 ] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
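After bdev_nvme_reset_controller, the second bdev_get_bdevs dump shows the same namespace reattached under a new controller ID (cntlid 1 → 2), which is how the reconnect is confirmed. A hypothetical one-liner to pull that field out, purely for illustration (the harness itself just captures the rpc_cmd output inside the shell script; jq is not part of the test):

scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'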
00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.57AyDW32zx 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.57AyDW32zx 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.57AyDW32zx 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 [2024-11-26 07:33:03.778743] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.843 [2024-11-26 07:33:03.778843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 [2024-11-26 07:33:03.794794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.843 nvme0n1 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 [ 00:23:35.843 { 00:23:35.843 "name": "nvme0n1", 00:23:35.843 "aliases": [ 00:23:35.843 "010f1062-7f1c-4a35-9dda-97b2fc1d0a1c" 00:23:35.843 ], 00:23:35.843 "product_name": "NVMe disk", 00:23:35.843 "block_size": 512, 00:23:35.843 "num_blocks": 2097152, 00:23:35.843 "uuid": "010f1062-7f1c-4a35-9dda-97b2fc1d0a1c", 00:23:35.843 "numa_id": 1, 00:23:35.843 "assigned_rate_limits": { 00:23:35.843 "rw_ios_per_sec": 0, 00:23:35.843 "rw_mbytes_per_sec": 0, 00:23:35.843 "r_mbytes_per_sec": 0, 00:23:35.843 "w_mbytes_per_sec": 0 00:23:35.843 }, 00:23:35.843 "claimed": false, 00:23:35.843 "zoned": false, 00:23:35.843 "supported_io_types": { 00:23:35.843 "read": true, 00:23:35.843 "write": true, 00:23:35.843 "unmap": false, 00:23:35.843 "flush": true, 00:23:35.843 "reset": true, 00:23:35.843 "nvme_admin": true, 00:23:35.843 "nvme_io": true, 00:23:35.843 "nvme_io_md": false, 00:23:35.843 "write_zeroes": true, 00:23:35.843 "zcopy": false, 00:23:35.843 "get_zone_info": false, 00:23:35.843 "zone_management": false, 00:23:35.843 "zone_append": false, 00:23:35.843 "compare": true, 00:23:35.843 "compare_and_write": true, 00:23:35.843 "abort": true, 00:23:35.843 "seek_hole": false, 00:23:35.843 "seek_data": false, 00:23:35.843 "copy": true, 00:23:35.843 "nvme_iov_md": false 00:23:35.843 }, 00:23:35.843 "memory_domains": [ 00:23:35.843 { 00:23:35.843 "dma_device_id": "system", 00:23:35.843 "dma_device_type": 1 00:23:35.843 } 00:23:35.843 ], 00:23:35.843 "driver_specific": { 00:23:35.843 "nvme": [ 00:23:35.843 { 00:23:35.843 "trid": { 00:23:35.843 "trtype": "TCP", 00:23:35.843 "adrfam": "IPv4", 00:23:35.843 "traddr": "10.0.0.2", 00:23:35.843 "trsvcid": "4421", 00:23:35.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:35.843 }, 00:23:35.843 "ctrlr_data": { 00:23:35.843 "cntlid": 3, 00:23:35.843 "vendor_id": "0x8086", 00:23:35.843 "model_number": "SPDK bdev Controller", 00:23:35.843 "serial_number": "00000000000000000000", 00:23:35.843 "firmware_revision": "25.01", 00:23:35.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.843 "oacs": { 00:23:35.843 "security": 0, 00:23:35.843 "format": 0, 00:23:35.843 "firmware": 0, 00:23:35.843 "ns_manage": 0 00:23:35.843 }, 00:23:35.843 "multi_ctrlr": true, 00:23:35.843 "ana_reporting": false 00:23:35.843 }, 00:23:35.843 "vs": { 00:23:35.843 "nvme_version": "1.3" 00:23:35.843 }, 00:23:35.843 "ns_data": { 00:23:35.843 "id": 1, 00:23:35.843 "can_share": true 00:23:35.843 } 00:23:35.843 } 00:23:35.843 ], 00:23:35.843 "mp_policy": "active_passive" 00:23:35.843 } 00:23:35.843 } 00:23:35.843 ] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.843 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.57AyDW32zx 00:23:35.844 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
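The last leg of the test repeats the attach over a TLS-protected listener. The steps are all in the trace above: write the interleaved PSK to a mode-0600 file, register it with the keyring, restrict the subsystem to an explicit host, open a --secure-channel listener on port 4421, and attach with --psk. Condensed into one place (the key string is the sample PSK printed by the trace, not a secret, and the key path comes from mktemp):

key=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
chmod 0600 "$key"
scripts/rpc.py keyring_file_add_key key0 "$key"
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk key0
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both ends log that TLS support is considered experimental, matching the NOTICE lines above; the resulting connection appears as cntlid 3 on trsvcid 4421 in the final bdev_get_bdevs dump.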
00:23:35.844 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:35.844 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.844 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:35.844 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.844 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:35.844 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.844 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.844 rmmod nvme_tcp 00:23:35.844 rmmod nvme_fabrics 00:23:35.844 rmmod nvme_keyring 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 814399 ']' 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 814399 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 814399 ']' 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 814399 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.102 07:33:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814399 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814399' 00:23:36.102 killing process with pid 814399 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 814399 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 814399 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.102 
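Teardown then runs roughly in reverse order of setup: detach the controller, delete the temporary key file, and let nvmftestfini unload the host-side modules, stop the target, and undo the network changes. A simplified sketch using this run's values (killprocess and _remove_spdk_ns in the harness do a graceful kill/wait and clean up every *_ns_spdk namespace; plain kill and a single netns delete are shown here for brevity):

modprobe -r nvme-tcp                      # also drops nvme_fabrics / nvme_keyring, as logged
modprobe -r nvme-fabrics
kill 814399                               # the nvmf_tgt started earlier as nvmfpid
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the tagged test rule
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1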
07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.102 07:33:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.640 00:23:38.640 real 0m8.846s 00:23:38.640 user 0m2.819s 00:23:38.640 sys 0m4.400s 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.640 ************************************ 00:23:38.640 END TEST nvmf_async_init 00:23:38.640 ************************************ 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.640 ************************************ 00:23:38.640 START TEST dma 00:23:38.640 ************************************ 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:38.640 * Looking for test storage... 00:23:38.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.640 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:38.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.641 --rc genhtml_branch_coverage=1 00:23:38.641 --rc genhtml_function_coverage=1 00:23:38.641 --rc genhtml_legend=1 00:23:38.641 --rc geninfo_all_blocks=1 00:23:38.641 --rc geninfo_unexecuted_blocks=1 00:23:38.641 00:23:38.641 ' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:38.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.641 --rc genhtml_branch_coverage=1 00:23:38.641 --rc genhtml_function_coverage=1 00:23:38.641 --rc genhtml_legend=1 00:23:38.641 --rc geninfo_all_blocks=1 00:23:38.641 --rc geninfo_unexecuted_blocks=1 00:23:38.641 00:23:38.641 ' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:38.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.641 --rc genhtml_branch_coverage=1 00:23:38.641 --rc genhtml_function_coverage=1 00:23:38.641 --rc genhtml_legend=1 00:23:38.641 --rc geninfo_all_blocks=1 00:23:38.641 --rc geninfo_unexecuted_blocks=1 00:23:38.641 00:23:38.641 ' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:38.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.641 --rc genhtml_branch_coverage=1 00:23:38.641 --rc genhtml_function_coverage=1 00:23:38.641 --rc genhtml_legend=1 00:23:38.641 --rc geninfo_all_blocks=1 00:23:38.641 --rc geninfo_unexecuted_blocks=1 00:23:38.641 00:23:38.641 ' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.641 
07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:38.641 00:23:38.641 real 0m0.205s 00:23:38.641 user 0m0.127s 00:23:38.641 sys 0m0.091s 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:38.641 ************************************ 00:23:38.641 END TEST dma 00:23:38.641 ************************************ 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.641 ************************************ 00:23:38.641 START TEST nvmf_identify 00:23:38.641 
************************************ 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:38.641 * Looking for test storage... 00:23:38.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.641 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:38.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.642 --rc genhtml_branch_coverage=1 00:23:38.642 --rc genhtml_function_coverage=1 00:23:38.642 --rc genhtml_legend=1 00:23:38.642 --rc geninfo_all_blocks=1 00:23:38.642 --rc geninfo_unexecuted_blocks=1 00:23:38.642 00:23:38.642 ' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:38.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.642 --rc genhtml_branch_coverage=1 00:23:38.642 --rc genhtml_function_coverage=1 00:23:38.642 --rc genhtml_legend=1 00:23:38.642 --rc geninfo_all_blocks=1 00:23:38.642 --rc geninfo_unexecuted_blocks=1 00:23:38.642 00:23:38.642 ' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:38.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.642 --rc genhtml_branch_coverage=1 00:23:38.642 --rc genhtml_function_coverage=1 00:23:38.642 --rc genhtml_legend=1 00:23:38.642 --rc geninfo_all_blocks=1 00:23:38.642 --rc geninfo_unexecuted_blocks=1 00:23:38.642 00:23:38.642 ' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:38.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.642 --rc genhtml_branch_coverage=1 00:23:38.642 --rc genhtml_function_coverage=1 00:23:38.642 --rc genhtml_legend=1 00:23:38.642 --rc geninfo_all_blocks=1 00:23:38.642 --rc geninfo_unexecuted_blocks=1 00:23:38.642 00:23:38.642 ' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.642 07:33:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:43.913 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.913 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:43.913 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:43.913 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:43.913 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:43.913 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:43.913 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:43.913 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:43.914 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:43.914 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:43.914 Found net devices under 0000:86:00.0: cvl_0_0 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:43.914 Found net devices under 0000:86:00.1: cvl_0_1 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:43.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:23:43.914 00:23:43.914 --- 10.0.0.2 ping statistics --- 00:23:43.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.914 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:43.914 00:23:43.914 --- 10.0.0.1 ping statistics --- 00:23:43.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.914 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=818425 00:23:43.914 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:43.915 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.915 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 818425 00:23:43.915 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 818425 ']' 00:23:43.915 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.915 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.915 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.915 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.915 07:33:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:43.915 [2024-11-26 07:33:11.955979] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:23:43.915 [2024-11-26 07:33:11.956029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.174 [2024-11-26 07:33:12.025449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.174 [2024-11-26 07:33:12.072124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.174 [2024-11-26 07:33:12.072160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.174 [2024-11-26 07:33:12.072167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.174 [2024-11-26 07:33:12.072173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.174 [2024-11-26 07:33:12.072178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.174 [2024-11-26 07:33:12.073753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.174 [2024-11-26 07:33:12.073851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.174 [2024-11-26 07:33:12.073867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.174 [2024-11-26 07:33:12.073876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.174 [2024-11-26 07:33:12.183627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.174 Malloc0 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.174 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.435 [2024-11-26 07:33:12.287104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.435 [ 00:23:44.435 { 00:23:44.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:44.435 "subtype": "Discovery", 00:23:44.435 "listen_addresses": [ 00:23:44.435 { 00:23:44.435 "trtype": "TCP", 00:23:44.435 "adrfam": "IPv4", 00:23:44.435 "traddr": "10.0.0.2", 00:23:44.435 "trsvcid": "4420" 00:23:44.435 } 00:23:44.435 ], 00:23:44.435 "allow_any_host": true, 00:23:44.435 "hosts": [] 00:23:44.435 }, 00:23:44.435 { 00:23:44.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.435 "subtype": "NVMe", 00:23:44.435 "listen_addresses": [ 00:23:44.435 { 00:23:44.435 "trtype": "TCP", 00:23:44.435 "adrfam": "IPv4", 00:23:44.435 "traddr": "10.0.0.2", 00:23:44.435 "trsvcid": "4420" 00:23:44.435 } 00:23:44.435 ], 00:23:44.435 "allow_any_host": true, 00:23:44.435 "hosts": [], 00:23:44.435 "serial_number": "SPDK00000000000001", 00:23:44.435 "model_number": "SPDK bdev Controller", 00:23:44.435 "max_namespaces": 32, 00:23:44.435 "min_cntlid": 1, 00:23:44.435 "max_cntlid": 65519, 00:23:44.435 "namespaces": [ 00:23:44.435 { 00:23:44.435 "nsid": 1, 00:23:44.435 "bdev_name": "Malloc0", 00:23:44.435 "name": "Malloc0", 00:23:44.435 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:44.435 "eui64": "ABCDEF0123456789", 00:23:44.435 "uuid": "94dd3680-f05c-4283-b29b-abd39731799f" 00:23:44.435 } 00:23:44.435 ] 00:23:44.435 } 00:23:44.435 ] 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.435 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:44.435 [2024-11-26 07:33:12.341166] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:23:44.435 [2024-11-26 07:33:12.341202] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818630 ] 00:23:44.435 [2024-11-26 07:33:12.380918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:44.435 [2024-11-26 07:33:12.384966] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:44.435 [2024-11-26 07:33:12.384973] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:44.435 [2024-11-26 07:33:12.384988] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:44.435 [2024-11-26 07:33:12.384998] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:44.435 [2024-11-26 07:33:12.385590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:44.435 [2024-11-26 07:33:12.385621] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa6a690 0 00:23:44.435 [2024-11-26 07:33:12.391962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:44.435 [2024-11-26 07:33:12.391975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:44.435 [2024-11-26 07:33:12.391979] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:44.435 [2024-11-26 07:33:12.391982] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:44.435 [2024-11-26 07:33:12.392016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.435 [2024-11-26 07:33:12.392023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.435 [2024-11-26 07:33:12.392027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.435 [2024-11-26 07:33:12.392039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:44.435 [2024-11-26 07:33:12.392056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.435 [2024-11-26 07:33:12.399955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.435 [2024-11-26 07:33:12.399963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.435 [2024-11-26 07:33:12.399967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.435 [2024-11-26 07:33:12.399971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.435 [2024-11-26 07:33:12.399982] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:44.435 [2024-11-26 07:33:12.399988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:44.435 [2024-11-26 07:33:12.399993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:44.435 [2024-11-26 07:33:12.400006] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.435 [2024-11-26 07:33:12.400010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.435 [2024-11-26 07:33:12.400013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.435 [2024-11-26 07:33:12.400021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.435 [2024-11-26 07:33:12.400033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.435 [2024-11-26 07:33:12.400211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.435 [2024-11-26 07:33:12.400217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.435 [2024-11-26 07:33:12.400220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.435 [2024-11-26 07:33:12.400224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.435 [2024-11-26 07:33:12.400228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:44.435 [2024-11-26 07:33:12.400235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:44.435 [2024-11-26 07:33:12.400241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.435 [2024-11-26 07:33:12.400245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.435 [2024-11-26 07:33:12.400248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 07:33:12.400254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.436 [2024-11-26 07:33:12.400264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.436 [2024-11-26 07:33:12.400330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.436 [2024-11-26 07:33:12.400336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.436 [2024-11-26 07:33:12.400339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.436 [2024-11-26 07:33:12.400346] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:44.436 [2024-11-26 07:33:12.400353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:44.436 [2024-11-26 07:33:12.400362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 07:33:12.400374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.436 [2024-11-26 07:33:12.400384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 
00:23:44.436 [2024-11-26 07:33:12.400450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.436 [2024-11-26 07:33:12.400456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.436 [2024-11-26 07:33:12.400459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.436 [2024-11-26 07:33:12.400467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:44.436 [2024-11-26 07:33:12.400475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 07:33:12.400488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.436 [2024-11-26 07:33:12.400497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.436 [2024-11-26 07:33:12.400559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.436 [2024-11-26 07:33:12.400565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.436 [2024-11-26 07:33:12.400568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.436 [2024-11-26 07:33:12.400576] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:44.436 [2024-11-26 07:33:12.400580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:44.436 [2024-11-26 07:33:12.400588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:44.436 [2024-11-26 07:33:12.400696] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:44.436 [2024-11-26 07:33:12.400700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:44.436 [2024-11-26 07:33:12.400707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 07:33:12.400720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.436 [2024-11-26 07:33:12.400729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.436 [2024-11-26 07:33:12.400810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.436 [2024-11-26 07:33:12.400816] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.436 [2024-11-26 07:33:12.400820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400823] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.436 [2024-11-26 07:33:12.400829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:44.436 [2024-11-26 07:33:12.400837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 07:33:12.400850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.436 [2024-11-26 07:33:12.400859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.436 [2024-11-26 07:33:12.400928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.436 [2024-11-26 07:33:12.400933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.436 [2024-11-26 07:33:12.400937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.436 [2024-11-26 07:33:12.400944] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:44.436 [2024-11-26 07:33:12.400953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:44.436 [2024-11-26 07:33:12.400960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:44.436 [2024-11-26 07:33:12.400967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:44.436 [2024-11-26 07:33:12.400976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.400979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 07:33:12.400985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.436 [2024-11-26 07:33:12.400996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.436 [2024-11-26 07:33:12.401115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.436 [2024-11-26 07:33:12.401121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.436 [2024-11-26 07:33:12.401124] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.401128] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6a690): datao=0, datal=4096, cccid=0 00:23:44.436 [2024-11-26 07:33:12.401132] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xacc100) on tqpair(0xa6a690): expected_datao=0, payload_size=4096 00:23:44.436 [2024-11-26 07:33:12.401136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.401146] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.401150] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.446957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.436 [2024-11-26 07:33:12.446972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.436 [2024-11-26 07:33:12.446976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.446980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.436 [2024-11-26 07:33:12.446988] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:44.436 [2024-11-26 07:33:12.446994] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:44.436 [2024-11-26 07:33:12.446998] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:44.436 [2024-11-26 07:33:12.447011] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:44.436 [2024-11-26 07:33:12.447018] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:44.436 [2024-11-26 07:33:12.447023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:44.436 [2024-11-26 07:33:12.447035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:44.436 [2024-11-26 07:33:12.447041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 07:33:12.447057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:44.436 [2024-11-26 07:33:12.447070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.436 [2024-11-26 07:33:12.447246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.436 [2024-11-26 07:33:12.447253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.436 [2024-11-26 07:33:12.447258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.436 [2024-11-26 07:33:12.447267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 
07:33:12.447281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.436 [2024-11-26 07:33:12.447287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa6a690) 00:23:44.436 [2024-11-26 07:33:12.447299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.436 [2024-11-26 07:33:12.447304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.436 [2024-11-26 07:33:12.447311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa6a690) 00:23:44.437 [2024-11-26 07:33:12.447317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.437 [2024-11-26 07:33:12.447324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.437 [2024-11-26 07:33:12.447337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.437 [2024-11-26 07:33:12.447341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:44.437 [2024-11-26 07:33:12.447349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:44.437 [2024-11-26 07:33:12.447355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6a690) 00:23:44.437 [2024-11-26 07:33:12.447367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.437 [2024-11-26 07:33:12.447378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc100, cid 0, qid 0 00:23:44.437 [2024-11-26 07:33:12.447383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc280, cid 1, qid 0 00:23:44.437 [2024-11-26 07:33:12.447387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc400, cid 2, qid 0 00:23:44.437 [2024-11-26 07:33:12.447391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.437 [2024-11-26 07:33:12.447395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc700, cid 4, qid 0 00:23:44.437 [2024-11-26 07:33:12.447497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.437 [2024-11-26 07:33:12.447504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.437 [2024-11-26 07:33:12.447507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.437 
[2024-11-26 07:33:12.447510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc700) on tqpair=0xa6a690 00:23:44.437 [2024-11-26 07:33:12.447517] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:44.437 [2024-11-26 07:33:12.447522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:44.437 [2024-11-26 07:33:12.447533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6a690) 00:23:44.437 [2024-11-26 07:33:12.447542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.437 [2024-11-26 07:33:12.447552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc700, cid 4, qid 0 00:23:44.437 [2024-11-26 07:33:12.447668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.437 [2024-11-26 07:33:12.447674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.437 [2024-11-26 07:33:12.447678] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447681] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6a690): datao=0, datal=4096, cccid=4 00:23:44.437 [2024-11-26 07:33:12.447685] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xacc700) on tqpair(0xa6a690): expected_datao=0, payload_size=4096 00:23:44.437 [2024-11-26 07:33:12.447689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447695] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447698] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.437 [2024-11-26 07:33:12.447714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.437 [2024-11-26 07:33:12.447717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc700) on tqpair=0xa6a690 00:23:44.437 [2024-11-26 07:33:12.447731] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:44.437 [2024-11-26 07:33:12.447750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6a690) 00:23:44.437 [2024-11-26 07:33:12.447760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.437 [2024-11-26 07:33:12.447766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa6a690) 00:23:44.437 [2024-11-26 07:33:12.447780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.437 [2024-11-26 07:33:12.447793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc700, cid 4, qid 0 00:23:44.437 [2024-11-26 07:33:12.447798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc880, cid 5, qid 0 00:23:44.437 [2024-11-26 07:33:12.447937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.437 [2024-11-26 07:33:12.447943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.437 [2024-11-26 07:33:12.447960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447964] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6a690): datao=0, datal=1024, cccid=4 00:23:44.437 [2024-11-26 07:33:12.447968] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xacc700) on tqpair(0xa6a690): expected_datao=0, payload_size=1024 00:23:44.437 [2024-11-26 07:33:12.447972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447978] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447981] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.437 [2024-11-26 07:33:12.447991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.437 [2024-11-26 07:33:12.447994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.447998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc880) on tqpair=0xa6a690 00:23:44.437 [2024-11-26 07:33:12.489133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.437 [2024-11-26 07:33:12.489144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.437 [2024-11-26 07:33:12.489147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.489150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc700) on tqpair=0xa6a690 00:23:44.437 [2024-11-26 07:33:12.489162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.489165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6a690) 00:23:44.437 [2024-11-26 07:33:12.489173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.437 [2024-11-26 07:33:12.489190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc700, cid 4, qid 0 00:23:44.437 [2024-11-26 07:33:12.489269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.437 [2024-11-26 07:33:12.489275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.437 [2024-11-26 07:33:12.489278] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.489281] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6a690): datao=0, datal=3072, cccid=4 00:23:44.437 [2024-11-26 07:33:12.489285] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xacc700) on tqpair(0xa6a690): expected_datao=0, payload_size=3072 00:23:44.437 [2024-11-26 07:33:12.489289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
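The GET LOG PAGE commands traced here (e.g. cdw10:00ff0070 and cdw10:02ff0070) are reads of the Fabrics discovery log page: the low byte of CDW10 is the log page ID 0x70 and bits 31:16 carry the zero-based dword count, so 0x00ff corresponds to the 1024-byte transfer and 0x02ff to the 3072-byte transfer reported in the c2h_data headers above. The fragment below is a hedged sketch of issuing the same read through SPDK's public admin API rather than the internal nvme_tcp paths shown in this trace; the function and variable names are illustrative, and the controller handle is assumed to come from an earlier spdk_nvme_connect() to the discovery subsystem.

/*
 * Hedged sketch, not taken from the test scripts: reading the discovery log
 * page (LID 0x70) over the admin queue with SPDK's public API.  Error
 * handling is reduced to return codes for brevity.
 */
#include <stdbool.h>
#include "spdk/nvme.h"

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    bool *done = cb_arg;

    (void)cpl;          /* a real caller would check the completion status */
    *done = true;
}

static int
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr, void *buf,
                   uint32_t len, uint64_t offset)
{
    bool done = false;
    int rc;

    /* SPDK builds CDW10 from the LID and the zero-based dword count,
     * e.g. len = 1024 -> NUMDL = 0x00ff -> cdw10 = 0x00ff0070 as in the trace. */
    rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                          0 /* nsid, as in the trace */,
                                          buf, len, offset,
                                          log_page_done, &done);
    if (rc != 0) {
        return rc;
    }

    /* Poll the admin queue until the completion callback fires. */
    while (!done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    return 0;
}

In practice the discovery log is read incrementally, as the trace shows: a first chunk for the header and initial entries, further chunks for the remaining entries, and a final short read to re-check the generation counter.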
00:23:44.437 [2024-11-26 07:33:12.489302] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.437 [2024-11-26 07:33:12.489306] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.702 [2024-11-26 07:33:12.532959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.702 [2024-11-26 07:33:12.532969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.702 [2024-11-26 07:33:12.532972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.702 [2024-11-26 07:33:12.532975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc700) on tqpair=0xa6a690 00:23:44.702 [2024-11-26 07:33:12.532985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.702 [2024-11-26 07:33:12.532992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa6a690) 00:23:44.702 [2024-11-26 07:33:12.532999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.702 [2024-11-26 07:33:12.533013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc700, cid 4, qid 0 00:23:44.702 [2024-11-26 07:33:12.533119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.702 [2024-11-26 07:33:12.533125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.702 [2024-11-26 07:33:12.533128] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.702 [2024-11-26 07:33:12.533131] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa6a690): datao=0, datal=8, cccid=4 00:23:44.702 [2024-11-26 07:33:12.533136] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xacc700) on tqpair(0xa6a690): expected_datao=0, payload_size=8 00:23:44.702 [2024-11-26 07:33:12.533140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.702 [2024-11-26 07:33:12.533145] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.702 [2024-11-26 07:33:12.533149] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.702 [2024-11-26 07:33:12.574087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.702 [2024-11-26 07:33:12.574098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.702 [2024-11-26 07:33:12.574102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.702 [2024-11-26 07:33:12.574105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc700) on tqpair=0xa6a690 00:23:44.702 ===================================================== 00:23:44.702 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:44.702 ===================================================== 00:23:44.702 Controller Capabilities/Features 00:23:44.702 ================================ 00:23:44.702 Vendor ID: 0000 00:23:44.702 Subsystem Vendor ID: 0000 00:23:44.702 Serial Number: .................... 00:23:44.702 Model Number: ........................................ 
00:23:44.702 Firmware Version: 25.01 00:23:44.702 Recommended Arb Burst: 0 00:23:44.702 IEEE OUI Identifier: 00 00 00 00:23:44.702 Multi-path I/O 00:23:44.702 May have multiple subsystem ports: No 00:23:44.702 May have multiple controllers: No 00:23:44.702 Associated with SR-IOV VF: No 00:23:44.702 Max Data Transfer Size: 131072 00:23:44.702 Max Number of Namespaces: 0 00:23:44.702 Max Number of I/O Queues: 1024 00:23:44.702 NVMe Specification Version (VS): 1.3 00:23:44.702 NVMe Specification Version (Identify): 1.3 00:23:44.702 Maximum Queue Entries: 128 00:23:44.702 Contiguous Queues Required: Yes 00:23:44.702 Arbitration Mechanisms Supported 00:23:44.702 Weighted Round Robin: Not Supported 00:23:44.702 Vendor Specific: Not Supported 00:23:44.702 Reset Timeout: 15000 ms 00:23:44.702 Doorbell Stride: 4 bytes 00:23:44.702 NVM Subsystem Reset: Not Supported 00:23:44.702 Command Sets Supported 00:23:44.702 NVM Command Set: Supported 00:23:44.702 Boot Partition: Not Supported 00:23:44.702 Memory Page Size Minimum: 4096 bytes 00:23:44.702 Memory Page Size Maximum: 4096 bytes 00:23:44.702 Persistent Memory Region: Not Supported 00:23:44.702 Optional Asynchronous Events Supported 00:23:44.702 Namespace Attribute Notices: Not Supported 00:23:44.702 Firmware Activation Notices: Not Supported 00:23:44.702 ANA Change Notices: Not Supported 00:23:44.702 PLE Aggregate Log Change Notices: Not Supported 00:23:44.702 LBA Status Info Alert Notices: Not Supported 00:23:44.702 EGE Aggregate Log Change Notices: Not Supported 00:23:44.702 Normal NVM Subsystem Shutdown event: Not Supported 00:23:44.702 Zone Descriptor Change Notices: Not Supported 00:23:44.702 Discovery Log Change Notices: Supported 00:23:44.702 Controller Attributes 00:23:44.702 128-bit Host Identifier: Not Supported 00:23:44.702 Non-Operational Permissive Mode: Not Supported 00:23:44.702 NVM Sets: Not Supported 00:23:44.702 Read Recovery Levels: Not Supported 00:23:44.702 Endurance Groups: Not Supported 00:23:44.702 Predictable Latency Mode: Not Supported 00:23:44.702 Traffic Based Keep ALive: Not Supported 00:23:44.702 Namespace Granularity: Not Supported 00:23:44.702 SQ Associations: Not Supported 00:23:44.702 UUID List: Not Supported 00:23:44.702 Multi-Domain Subsystem: Not Supported 00:23:44.702 Fixed Capacity Management: Not Supported 00:23:44.702 Variable Capacity Management: Not Supported 00:23:44.702 Delete Endurance Group: Not Supported 00:23:44.702 Delete NVM Set: Not Supported 00:23:44.702 Extended LBA Formats Supported: Not Supported 00:23:44.702 Flexible Data Placement Supported: Not Supported 00:23:44.702 00:23:44.702 Controller Memory Buffer Support 00:23:44.702 ================================ 00:23:44.702 Supported: No 00:23:44.702 00:23:44.702 Persistent Memory Region Support 00:23:44.702 ================================ 00:23:44.703 Supported: No 00:23:44.703 00:23:44.703 Admin Command Set Attributes 00:23:44.703 ============================ 00:23:44.703 Security Send/Receive: Not Supported 00:23:44.703 Format NVM: Not Supported 00:23:44.703 Firmware Activate/Download: Not Supported 00:23:44.703 Namespace Management: Not Supported 00:23:44.703 Device Self-Test: Not Supported 00:23:44.703 Directives: Not Supported 00:23:44.703 NVMe-MI: Not Supported 00:23:44.703 Virtualization Management: Not Supported 00:23:44.703 Doorbell Buffer Config: Not Supported 00:23:44.703 Get LBA Status Capability: Not Supported 00:23:44.703 Command & Feature Lockdown Capability: Not Supported 00:23:44.703 Abort Command Limit: 1 00:23:44.703 Async 
Event Request Limit: 4 00:23:44.703 Number of Firmware Slots: N/A 00:23:44.703 Firmware Slot 1 Read-Only: N/A 00:23:44.703 Firmware Activation Without Reset: N/A 00:23:44.703 Multiple Update Detection Support: N/A 00:23:44.703 Firmware Update Granularity: No Information Provided 00:23:44.703 Per-Namespace SMART Log: No 00:23:44.703 Asymmetric Namespace Access Log Page: Not Supported 00:23:44.703 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:44.703 Command Effects Log Page: Not Supported 00:23:44.703 Get Log Page Extended Data: Supported 00:23:44.703 Telemetry Log Pages: Not Supported 00:23:44.703 Persistent Event Log Pages: Not Supported 00:23:44.703 Supported Log Pages Log Page: May Support 00:23:44.703 Commands Supported & Effects Log Page: Not Supported 00:23:44.703 Feature Identifiers & Effects Log Page:May Support 00:23:44.703 NVMe-MI Commands & Effects Log Page: May Support 00:23:44.703 Data Area 4 for Telemetry Log: Not Supported 00:23:44.703 Error Log Page Entries Supported: 128 00:23:44.703 Keep Alive: Not Supported 00:23:44.703 00:23:44.703 NVM Command Set Attributes 00:23:44.703 ========================== 00:23:44.703 Submission Queue Entry Size 00:23:44.703 Max: 1 00:23:44.703 Min: 1 00:23:44.703 Completion Queue Entry Size 00:23:44.703 Max: 1 00:23:44.703 Min: 1 00:23:44.703 Number of Namespaces: 0 00:23:44.703 Compare Command: Not Supported 00:23:44.703 Write Uncorrectable Command: Not Supported 00:23:44.703 Dataset Management Command: Not Supported 00:23:44.703 Write Zeroes Command: Not Supported 00:23:44.703 Set Features Save Field: Not Supported 00:23:44.703 Reservations: Not Supported 00:23:44.703 Timestamp: Not Supported 00:23:44.703 Copy: Not Supported 00:23:44.703 Volatile Write Cache: Not Present 00:23:44.703 Atomic Write Unit (Normal): 1 00:23:44.703 Atomic Write Unit (PFail): 1 00:23:44.703 Atomic Compare & Write Unit: 1 00:23:44.703 Fused Compare & Write: Supported 00:23:44.703 Scatter-Gather List 00:23:44.703 SGL Command Set: Supported 00:23:44.703 SGL Keyed: Supported 00:23:44.703 SGL Bit Bucket Descriptor: Not Supported 00:23:44.703 SGL Metadata Pointer: Not Supported 00:23:44.703 Oversized SGL: Not Supported 00:23:44.703 SGL Metadata Address: Not Supported 00:23:44.703 SGL Offset: Supported 00:23:44.703 Transport SGL Data Block: Not Supported 00:23:44.703 Replay Protected Memory Block: Not Supported 00:23:44.703 00:23:44.703 Firmware Slot Information 00:23:44.703 ========================= 00:23:44.703 Active slot: 0 00:23:44.703 00:23:44.703 00:23:44.703 Error Log 00:23:44.703 ========= 00:23:44.703 00:23:44.703 Active Namespaces 00:23:44.703 ================= 00:23:44.703 Discovery Log Page 00:23:44.703 ================== 00:23:44.703 Generation Counter: 2 00:23:44.703 Number of Records: 2 00:23:44.703 Record Format: 0 00:23:44.703 00:23:44.703 Discovery Log Entry 0 00:23:44.703 ---------------------- 00:23:44.703 Transport Type: 3 (TCP) 00:23:44.703 Address Family: 1 (IPv4) 00:23:44.703 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:44.703 Entry Flags: 00:23:44.703 Duplicate Returned Information: 1 00:23:44.703 Explicit Persistent Connection Support for Discovery: 1 00:23:44.703 Transport Requirements: 00:23:44.703 Secure Channel: Not Required 00:23:44.703 Port ID: 0 (0x0000) 00:23:44.703 Controller ID: 65535 (0xffff) 00:23:44.703 Admin Max SQ Size: 128 00:23:44.703 Transport Service Identifier: 4420 00:23:44.703 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:44.703 Transport Address: 10.0.0.2 00:23:44.703 
Discovery Log Entry 1 00:23:44.703 ---------------------- 00:23:44.703 Transport Type: 3 (TCP) 00:23:44.703 Address Family: 1 (IPv4) 00:23:44.703 Subsystem Type: 2 (NVM Subsystem) 00:23:44.703 Entry Flags: 00:23:44.703 Duplicate Returned Information: 0 00:23:44.703 Explicit Persistent Connection Support for Discovery: 0 00:23:44.703 Transport Requirements: 00:23:44.703 Secure Channel: Not Required 00:23:44.703 Port ID: 0 (0x0000) 00:23:44.703 Controller ID: 65535 (0xffff) 00:23:44.703 Admin Max SQ Size: 128 00:23:44.703 Transport Service Identifier: 4420 00:23:44.703 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:44.703 Transport Address: 10.0.0.2 [2024-11-26 07:33:12.574191] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:44.703 [2024-11-26 07:33:12.574202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc100) on tqpair=0xa6a690 00:23:44.703 [2024-11-26 07:33:12.574208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.703 [2024-11-26 07:33:12.574213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc280) on tqpair=0xa6a690 00:23:44.703 [2024-11-26 07:33:12.574218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.703 [2024-11-26 07:33:12.574222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc400) on tqpair=0xa6a690 00:23:44.703 [2024-11-26 07:33:12.574226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.703 [2024-11-26 07:33:12.574231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.703 [2024-11-26 07:33:12.574235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.703 [2024-11-26 07:33:12.574245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.703 [2024-11-26 07:33:12.574260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.703 [2024-11-26 07:33:12.574273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.703 [2024-11-26 07:33:12.574341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.703 [2024-11-26 07:33:12.574348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.703 [2024-11-26 07:33:12.574351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.703 [2024-11-26 07:33:12.574361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.703 [2024-11-26 07:33:12.574375] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.703 [2024-11-26 07:33:12.574388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.703 [2024-11-26 07:33:12.574492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.703 [2024-11-26 07:33:12.574497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.703 [2024-11-26 07:33:12.574501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.703 [2024-11-26 07:33:12.574508] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:44.703 [2024-11-26 07:33:12.574512] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:44.703 [2024-11-26 07:33:12.574521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.703 [2024-11-26 07:33:12.574534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.703 [2024-11-26 07:33:12.574543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.703 [2024-11-26 07:33:12.574642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.703 [2024-11-26 07:33:12.574648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.703 [2024-11-26 07:33:12.574651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.703 [2024-11-26 07:33:12.574663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.703 [2024-11-26 07:33:12.574670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.574676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.574685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.574748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.574754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.574757] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.574760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.574769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.574772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.574776] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.574781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.574791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.574894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.574900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.574903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.574907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.574917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.574921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.574924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.574930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.574939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.575046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.575053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.575056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.575068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.575080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.575090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.575197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.575203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.575206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.575218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.575231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.575240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.575316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.575322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.575325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.575337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.575350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.575360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.575448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.575453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.575456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.575468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.575483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.575492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.575599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.575604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.575607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.575619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.575632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.575641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.575749] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.575755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.575759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.575770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.575783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.575792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.575856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.575862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.575865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.575877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.575884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.575890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.575899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.576002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.576009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.576012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.576015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.576023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.576027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.576032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.576038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.576047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.576154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.576159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.576162] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.576166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.576174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.576178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.576181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.704 [2024-11-26 07:33:12.576187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.704 [2024-11-26 07:33:12.576196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.704 [2024-11-26 07:33:12.576304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.704 [2024-11-26 07:33:12.576310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.704 [2024-11-26 07:33:12.576313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.704 [2024-11-26 07:33:12.576317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.704 [2024-11-26 07:33:12.576325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.705 [2024-11-26 07:33:12.576338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.705 [2024-11-26 07:33:12.576347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.705 [2024-11-26 07:33:12.576408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.576414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.576417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.705 [2024-11-26 07:33:12.576429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.705 [2024-11-26 07:33:12.576442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.705 [2024-11-26 07:33:12.576451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.705 [2024-11-26 07:33:12.576557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.576563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.576566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.705 
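The long run of near-identical FABRIC PROPERTY GET qid:0 cid:3 entries in this stretch is the host polling the controller's CSTS register (exposed as a fabrics property) while the discovery controller shuts down, following the "Prepare to destruct" and "shutdown timeout = 10000 ms" messages above; the poll finishes a few entries later with "shutdown complete in 6 milliseconds". As a hedged sketch, the exchange captured in this trace, from FABRIC CONNECT through the CSTS polling, is roughly what SPDK's public API produces between spdk_nvme_connect() and spdk_nvme_detach(); the address and discovery NQN below are copied from the log, everything else (program name, error handling, elided identify steps) is illustrative.

/*
 * Hedged sketch, not part of the test output: connect to and cleanly detach
 * from the discovery controller whose teardown is traced around this point.
 */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "discovery_sketch";   /* illustrative name */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
        return 1;
    }

    /* Drives the FABRIC CONNECT / property get & set sequence seen earlier. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    /* ... identify and discovery log page reads would go here ... */

    /* Sets CC.SHN and polls CSTS -- the repeated "FABRIC PROPERTY GET
     * qid:0 cid:3" entries around this point -- until the controller
     * reports shutdown complete, then frees the controller. */
    spdk_nvme_detach(ctrlr);
    return 0;
}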
[2024-11-26 07:33:12.576577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.705 [2024-11-26 07:33:12.576592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.705 [2024-11-26 07:33:12.576601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.705 [2024-11-26 07:33:12.576707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.576713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.576716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.705 [2024-11-26 07:33:12.576728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.705 [2024-11-26 07:33:12.576740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.705 [2024-11-26 07:33:12.576749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.705 [2024-11-26 07:33:12.576858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.576864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.576867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.705 [2024-11-26 07:33:12.576879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.576886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.705 [2024-11-26 07:33:12.576892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.705 [2024-11-26 07:33:12.576901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.705 [2024-11-26 07:33:12.580957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.580965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.580968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.580971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.705 [2024-11-26 07:33:12.580981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.580985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 
07:33:12.580988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa6a690) 00:23:44.705 [2024-11-26 07:33:12.580994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.705 [2024-11-26 07:33:12.581005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xacc580, cid 3, qid 0 00:23:44.705 [2024-11-26 07:33:12.581189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.581195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.581198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.581201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xacc580) on tqpair=0xa6a690 00:23:44.705 [2024-11-26 07:33:12.581208] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:23:44.705 00:23:44.705 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:44.705 [2024-11-26 07:33:12.618274] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:23:44.705 [2024-11-26 07:33:12.618313] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818638 ] 00:23:44.705 [2024-11-26 07:33:12.659740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:44.705 [2024-11-26 07:33:12.659784] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:44.705 [2024-11-26 07:33:12.659789] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:44.705 [2024-11-26 07:33:12.659807] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:44.705 [2024-11-26 07:33:12.659816] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:44.705 [2024-11-26 07:33:12.663211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:44.705 [2024-11-26 07:33:12.663237] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13df690 0 00:23:44.705 [2024-11-26 07:33:12.676955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:44.705 [2024-11-26 07:33:12.676969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:44.705 [2024-11-26 07:33:12.676974] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:44.705 [2024-11-26 07:33:12.676977] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:44.705 [2024-11-26 07:33:12.677003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.677008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.677011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.705 [2024-11-26 
07:33:12.677022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:44.705 [2024-11-26 07:33:12.677038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.705 [2024-11-26 07:33:12.683956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.683964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.683968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.683972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.705 [2024-11-26 07:33:12.683981] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:44.705 [2024-11-26 07:33:12.683987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:44.705 [2024-11-26 07:33:12.683992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:44.705 [2024-11-26 07:33:12.684002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.684006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.684010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.705 [2024-11-26 07:33:12.684016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.705 [2024-11-26 07:33:12.684029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.705 [2024-11-26 07:33:12.684191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.684196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.684202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.684206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.705 [2024-11-26 07:33:12.684210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:44.705 [2024-11-26 07:33:12.684218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:44.705 [2024-11-26 07:33:12.684225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.684228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.684232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.705 [2024-11-26 07:33:12.684238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.705 [2024-11-26 07:33:12.684248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.705 [2024-11-26 07:33:12.684349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.705 [2024-11-26 07:33:12.684355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.705 [2024-11-26 07:33:12.684358] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.705 [2024-11-26 07:33:12.684361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.705 [2024-11-26 07:33:12.684366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:44.706 [2024-11-26 07:33:12.684373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:44.706 [2024-11-26 07:33:12.684379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.684391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.706 [2024-11-26 07:33:12.684401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.706 [2024-11-26 07:33:12.684500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.706 [2024-11-26 07:33:12.684505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.706 [2024-11-26 07:33:12.684508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.706 [2024-11-26 07:33:12.684516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:44.706 [2024-11-26 07:33:12.684524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.684537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.706 [2024-11-26 07:33:12.684547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.706 [2024-11-26 07:33:12.684651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.706 [2024-11-26 07:33:12.684657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.706 [2024-11-26 07:33:12.684660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.706 [2024-11-26 07:33:12.684667] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:44.706 [2024-11-26 07:33:12.684673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:44.706 [2024-11-26 07:33:12.684681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:44.706 [2024-11-26 
07:33:12.684788] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:44.706 [2024-11-26 07:33:12.684792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:44.706 [2024-11-26 07:33:12.684799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.684811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.706 [2024-11-26 07:33:12.684821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.706 [2024-11-26 07:33:12.684887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.706 [2024-11-26 07:33:12.684892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.706 [2024-11-26 07:33:12.684896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.706 [2024-11-26 07:33:12.684903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:44.706 [2024-11-26 07:33:12.684911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.684918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.684923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.706 [2024-11-26 07:33:12.684933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.706 [2024-11-26 07:33:12.685039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.706 [2024-11-26 07:33:12.685045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.706 [2024-11-26 07:33:12.685048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.706 [2024-11-26 07:33:12.685056] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:44.706 [2024-11-26 07:33:12.685060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:44.706 [2024-11-26 07:33:12.685067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:44.706 [2024-11-26 07:33:12.685074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:44.706 [2024-11-26 07:33:12.685082] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.685091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.706 [2024-11-26 07:33:12.685101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.706 [2024-11-26 07:33:12.685201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.706 [2024-11-26 07:33:12.685207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.706 [2024-11-26 07:33:12.685210] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685214] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df690): datao=0, datal=4096, cccid=0 00:23:44.706 [2024-11-26 07:33:12.685218] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1441100) on tqpair(0x13df690): expected_datao=0, payload_size=4096 00:23:44.706 [2024-11-26 07:33:12.685222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685228] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685231] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.706 [2024-11-26 07:33:12.685296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.706 [2024-11-26 07:33:12.685299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.706 [2024-11-26 07:33:12.685309] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:44.706 [2024-11-26 07:33:12.685314] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:44.706 [2024-11-26 07:33:12.685318] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:44.706 [2024-11-26 07:33:12.685324] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:44.706 [2024-11-26 07:33:12.685328] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:44.706 [2024-11-26 07:33:12.685332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:44.706 [2024-11-26 07:33:12.685342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:44.706 [2024-11-26 07:33:12.685348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.685361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:44.706 [2024-11-26 07:33:12.685371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.706 [2024-11-26 07:33:12.685441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.706 [2024-11-26 07:33:12.685446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.706 [2024-11-26 07:33:12.685449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.706 [2024-11-26 07:33:12.685459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.685470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.706 [2024-11-26 07:33:12.685476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.685488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.706 [2024-11-26 07:33:12.685494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.685505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.706 [2024-11-26 07:33:12.685510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.706 [2024-11-26 07:33:12.685516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.706 [2024-11-26 07:33:12.685521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.707 [2024-11-26 07:33:12.685525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.685533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.685539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.685542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df690) 00:23:44.707 [2024-11-26 07:33:12.685547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:44.707 [2024-11-26 07:33:12.685558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441100, cid 0, qid 0 00:23:44.707 [2024-11-26 07:33:12.685563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441280, cid 1, qid 0 00:23:44.707 [2024-11-26 07:33:12.685567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441400, cid 2, qid 0 00:23:44.707 [2024-11-26 07:33:12.685571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.707 [2024-11-26 07:33:12.685575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441700, cid 4, qid 0 00:23:44.707 [2024-11-26 07:33:12.685694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.707 [2024-11-26 07:33:12.685700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.707 [2024-11-26 07:33:12.685703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.685706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441700) on tqpair=0x13df690 00:23:44.707 [2024-11-26 07:33:12.685712] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:44.707 [2024-11-26 07:33:12.685717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.685725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.685730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.685736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.685739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.685742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df690) 00:23:44.707 [2024-11-26 07:33:12.685748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:44.707 [2024-11-26 07:33:12.685759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441700, cid 4, qid 0 00:23:44.707 [2024-11-26 07:33:12.685821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.707 [2024-11-26 07:33:12.685827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.707 [2024-11-26 07:33:12.685830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.685834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441700) on tqpair=0x13df690 00:23:44.707 [2024-11-26 07:33:12.685886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.685896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.685903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.685906] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df690) 00:23:44.707 [2024-11-26 07:33:12.685912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.707 [2024-11-26 07:33:12.685922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441700, cid 4, qid 0 00:23:44.707 [2024-11-26 07:33:12.686009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.707 [2024-11-26 07:33:12.686016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.707 [2024-11-26 07:33:12.686019] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686022] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df690): datao=0, datal=4096, cccid=4 00:23:44.707 [2024-11-26 07:33:12.686026] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1441700) on tqpair(0x13df690): expected_datao=0, payload_size=4096 00:23:44.707 [2024-11-26 07:33:12.686030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686036] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686040] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.707 [2024-11-26 07:33:12.686055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.707 [2024-11-26 07:33:12.686058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441700) on tqpair=0x13df690 00:23:44.707 [2024-11-26 07:33:12.686069] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:44.707 [2024-11-26 07:33:12.686077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.686086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.686093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df690) 00:23:44.707 [2024-11-26 07:33:12.686102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.707 [2024-11-26 07:33:12.686112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441700, cid 4, qid 0 00:23:44.707 [2024-11-26 07:33:12.686210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.707 [2024-11-26 07:33:12.686216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.707 [2024-11-26 07:33:12.686218] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686222] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df690): datao=0, datal=4096, cccid=4 00:23:44.707 [2024-11-26 07:33:12.686228] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1441700) on tqpair(0x13df690): 
expected_datao=0, payload_size=4096 00:23:44.707 [2024-11-26 07:33:12.686232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686237] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686241] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.707 [2024-11-26 07:33:12.686255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.707 [2024-11-26 07:33:12.686258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441700) on tqpair=0x13df690 00:23:44.707 [2024-11-26 07:33:12.686272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.686281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:44.707 [2024-11-26 07:33:12.686287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df690) 00:23:44.707 [2024-11-26 07:33:12.686296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.707 [2024-11-26 07:33:12.686306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441700, cid 4, qid 0 00:23:44.707 [2024-11-26 07:33:12.686411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.707 [2024-11-26 07:33:12.686417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.707 [2024-11-26 07:33:12.686420] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686423] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df690): datao=0, datal=4096, cccid=4 00:23:44.707 [2024-11-26 07:33:12.686427] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1441700) on tqpair(0x13df690): expected_datao=0, payload_size=4096 00:23:44.707 [2024-11-26 07:33:12.686431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686436] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686440] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.707 [2024-11-26 07:33:12.686454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.707 [2024-11-26 07:33:12.686458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.707 [2024-11-26 07:33:12.686461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441700) on tqpair=0x13df690 00:23:44.707 [2024-11-26 07:33:12.686467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:44.708 [2024-11-26 07:33:12.686475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set 
supported log pages (timeout 30000 ms) 00:23:44.708 [2024-11-26 07:33:12.686483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:44.708 [2024-11-26 07:33:12.686488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:44.708 [2024-11-26 07:33:12.686493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:44.708 [2024-11-26 07:33:12.686497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:44.708 [2024-11-26 07:33:12.686504] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:44.708 [2024-11-26 07:33:12.686508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:44.708 [2024-11-26 07:33:12.686512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:44.708 [2024-11-26 07:33:12.686525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.686529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.686535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.708 [2024-11-26 07:33:12.686540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.686543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.686546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.686552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.708 [2024-11-26 07:33:12.686564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441700, cid 4, qid 0 00:23:44.708 [2024-11-26 07:33:12.686570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441880, cid 5, qid 0 00:23:44.708 [2024-11-26 07:33:12.686686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.708 [2024-11-26 07:33:12.686692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.708 [2024-11-26 07:33:12.686695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.686698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441700) on tqpair=0x13df690 00:23:44.708 [2024-11-26 07:33:12.686704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.708 [2024-11-26 07:33:12.686709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.708 [2024-11-26 07:33:12.686712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.686715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441880) on tqpair=0x13df690 00:23:44.708 [2024-11-26 07:33:12.686723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 
[2024-11-26 07:33:12.686727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.686732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.708 [2024-11-26 07:33:12.686742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441880, cid 5, qid 0 00:23:44.708 [2024-11-26 07:33:12.686845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.708 [2024-11-26 07:33:12.686851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.708 [2024-11-26 07:33:12.686854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.686857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441880) on tqpair=0x13df690 00:23:44.708 [2024-11-26 07:33:12.686866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.686870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.686875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.708 [2024-11-26 07:33:12.686885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441880, cid 5, qid 0 00:23:44.708 [2024-11-26 07:33:12.686989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.708 [2024-11-26 07:33:12.686996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.708 [2024-11-26 07:33:12.686999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441880) on tqpair=0x13df690 00:23:44.708 [2024-11-26 07:33:12.687012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.687021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.708 [2024-11-26 07:33:12.687031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441880, cid 5, qid 0 00:23:44.708 [2024-11-26 07:33:12.687092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.708 [2024-11-26 07:33:12.687098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.708 [2024-11-26 07:33:12.687101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441880) on tqpair=0x13df690 00:23:44.708 [2024-11-26 07:33:12.687116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.687126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.708 [2024-11-26 07:33:12.687132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 
[2024-11-26 07:33:12.687135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.687140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.708 [2024-11-26 07:33:12.687146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.687155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.708 [2024-11-26 07:33:12.687161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13df690) 00:23:44.708 [2024-11-26 07:33:12.687169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.708 [2024-11-26 07:33:12.687180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441880, cid 5, qid 0 00:23:44.708 [2024-11-26 07:33:12.687185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441700, cid 4, qid 0 00:23:44.708 [2024-11-26 07:33:12.687189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441a00, cid 6, qid 0 00:23:44.708 [2024-11-26 07:33:12.687193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441b80, cid 7, qid 0 00:23:44.708 [2024-11-26 07:33:12.687336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.708 [2024-11-26 07:33:12.687342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.708 [2024-11-26 07:33:12.687345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687348] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df690): datao=0, datal=8192, cccid=5 00:23:44.708 [2024-11-26 07:33:12.687352] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1441880) on tqpair(0x13df690): expected_datao=0, payload_size=8192 00:23:44.708 [2024-11-26 07:33:12.687356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687397] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.708 [2024-11-26 07:33:12.687410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.708 [2024-11-26 07:33:12.687413] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687416] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df690): datao=0, datal=512, cccid=4 00:23:44.708 [2024-11-26 07:33:12.687420] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1441700) on tqpair(0x13df690): expected_datao=0, payload_size=512 00:23:44.708 [2024-11-26 07:33:12.687424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.708 [2024-11-26 
07:33:12.687429] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687432] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.708 [2024-11-26 07:33:12.687442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.708 [2024-11-26 07:33:12.687445] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687448] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df690): datao=0, datal=512, cccid=6 00:23:44.708 [2024-11-26 07:33:12.687451] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1441a00) on tqpair(0x13df690): expected_datao=0, payload_size=512 00:23:44.708 [2024-11-26 07:33:12.687455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687460] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687464] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:44.708 [2024-11-26 07:33:12.687473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:44.708 [2024-11-26 07:33:12.687476] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687479] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13df690): datao=0, datal=4096, cccid=7 00:23:44.708 [2024-11-26 07:33:12.687483] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1441b80) on tqpair(0x13df690): expected_datao=0, payload_size=4096 00:23:44.708 [2024-11-26 07:33:12.687487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687492] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687496] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.708 [2024-11-26 07:33:12.687508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.708 [2024-11-26 07:33:12.687511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.708 [2024-11-26 07:33:12.687514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441880) on tqpair=0x13df690 00:23:44.708 [2024-11-26 07:33:12.687524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.709 [2024-11-26 07:33:12.687529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.709 [2024-11-26 07:33:12.687532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.709 [2024-11-26 07:33:12.687535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441700) on tqpair=0x13df690 00:23:44.709 [2024-11-26 07:33:12.687544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.709 [2024-11-26 07:33:12.687549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.709 [2024-11-26 07:33:12.687552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.709 [2024-11-26 07:33:12.687556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441a00) on tqpair=0x13df690 00:23:44.709 [2024-11-26 07:33:12.687561] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.709 [2024-11-26 07:33:12.687566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.709 [2024-11-26 07:33:12.687569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.709 [2024-11-26 07:33:12.687574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441b80) on tqpair=0x13df690 00:23:44.709 ===================================================== 00:23:44.709 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.709 ===================================================== 00:23:44.709 Controller Capabilities/Features 00:23:44.709 ================================ 00:23:44.709 Vendor ID: 8086 00:23:44.709 Subsystem Vendor ID: 8086 00:23:44.709 Serial Number: SPDK00000000000001 00:23:44.709 Model Number: SPDK bdev Controller 00:23:44.709 Firmware Version: 25.01 00:23:44.709 Recommended Arb Burst: 6 00:23:44.709 IEEE OUI Identifier: e4 d2 5c 00:23:44.709 Multi-path I/O 00:23:44.709 May have multiple subsystem ports: Yes 00:23:44.709 May have multiple controllers: Yes 00:23:44.709 Associated with SR-IOV VF: No 00:23:44.709 Max Data Transfer Size: 131072 00:23:44.709 Max Number of Namespaces: 32 00:23:44.709 Max Number of I/O Queues: 127 00:23:44.709 NVMe Specification Version (VS): 1.3 00:23:44.709 NVMe Specification Version (Identify): 1.3 00:23:44.709 Maximum Queue Entries: 128 00:23:44.709 Contiguous Queues Required: Yes 00:23:44.709 Arbitration Mechanisms Supported 00:23:44.709 Weighted Round Robin: Not Supported 00:23:44.709 Vendor Specific: Not Supported 00:23:44.709 Reset Timeout: 15000 ms 00:23:44.709 Doorbell Stride: 4 bytes 00:23:44.709 NVM Subsystem Reset: Not Supported 00:23:44.709 Command Sets Supported 00:23:44.709 NVM Command Set: Supported 00:23:44.709 Boot Partition: Not Supported 00:23:44.709 Memory Page Size Minimum: 4096 bytes 00:23:44.709 Memory Page Size Maximum: 4096 bytes 00:23:44.709 Persistent Memory Region: Not Supported 00:23:44.709 Optional Asynchronous Events Supported 00:23:44.709 Namespace Attribute Notices: Supported 00:23:44.709 Firmware Activation Notices: Not Supported 00:23:44.709 ANA Change Notices: Not Supported 00:23:44.709 PLE Aggregate Log Change Notices: Not Supported 00:23:44.709 LBA Status Info Alert Notices: Not Supported 00:23:44.709 EGE Aggregate Log Change Notices: Not Supported 00:23:44.709 Normal NVM Subsystem Shutdown event: Not Supported 00:23:44.709 Zone Descriptor Change Notices: Not Supported 00:23:44.709 Discovery Log Change Notices: Not Supported 00:23:44.709 Controller Attributes 00:23:44.709 128-bit Host Identifier: Supported 00:23:44.709 Non-Operational Permissive Mode: Not Supported 00:23:44.709 NVM Sets: Not Supported 00:23:44.709 Read Recovery Levels: Not Supported 00:23:44.709 Endurance Groups: Not Supported 00:23:44.709 Predictable Latency Mode: Not Supported 00:23:44.709 Traffic Based Keep ALive: Not Supported 00:23:44.709 Namespace Granularity: Not Supported 00:23:44.709 SQ Associations: Not Supported 00:23:44.709 UUID List: Not Supported 00:23:44.709 Multi-Domain Subsystem: Not Supported 00:23:44.709 Fixed Capacity Management: Not Supported 00:23:44.709 Variable Capacity Management: Not Supported 00:23:44.709 Delete Endurance Group: Not Supported 00:23:44.709 Delete NVM Set: Not Supported 00:23:44.709 Extended LBA Formats Supported: Not Supported 00:23:44.709 Flexible Data Placement Supported: Not Supported 00:23:44.709 00:23:44.709 Controller Memory Buffer Support 
00:23:44.709 ================================ 00:23:44.709 Supported: No 00:23:44.709 00:23:44.709 Persistent Memory Region Support 00:23:44.709 ================================ 00:23:44.709 Supported: No 00:23:44.709 00:23:44.709 Admin Command Set Attributes 00:23:44.709 ============================ 00:23:44.709 Security Send/Receive: Not Supported 00:23:44.709 Format NVM: Not Supported 00:23:44.709 Firmware Activate/Download: Not Supported 00:23:44.709 Namespace Management: Not Supported 00:23:44.709 Device Self-Test: Not Supported 00:23:44.709 Directives: Not Supported 00:23:44.709 NVMe-MI: Not Supported 00:23:44.709 Virtualization Management: Not Supported 00:23:44.709 Doorbell Buffer Config: Not Supported 00:23:44.709 Get LBA Status Capability: Not Supported 00:23:44.709 Command & Feature Lockdown Capability: Not Supported 00:23:44.709 Abort Command Limit: 4 00:23:44.709 Async Event Request Limit: 4 00:23:44.709 Number of Firmware Slots: N/A 00:23:44.709 Firmware Slot 1 Read-Only: N/A 00:23:44.709 Firmware Activation Without Reset: N/A 00:23:44.709 Multiple Update Detection Support: N/A 00:23:44.709 Firmware Update Granularity: No Information Provided 00:23:44.709 Per-Namespace SMART Log: No 00:23:44.709 Asymmetric Namespace Access Log Page: Not Supported 00:23:44.709 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:44.709 Command Effects Log Page: Supported 00:23:44.709 Get Log Page Extended Data: Supported 00:23:44.709 Telemetry Log Pages: Not Supported 00:23:44.709 Persistent Event Log Pages: Not Supported 00:23:44.709 Supported Log Pages Log Page: May Support 00:23:44.709 Commands Supported & Effects Log Page: Not Supported 00:23:44.709 Feature Identifiers & Effects Log Page:May Support 00:23:44.709 NVMe-MI Commands & Effects Log Page: May Support 00:23:44.709 Data Area 4 for Telemetry Log: Not Supported 00:23:44.709 Error Log Page Entries Supported: 128 00:23:44.709 Keep Alive: Supported 00:23:44.709 Keep Alive Granularity: 10000 ms 00:23:44.709 00:23:44.709 NVM Command Set Attributes 00:23:44.709 ========================== 00:23:44.709 Submission Queue Entry Size 00:23:44.709 Max: 64 00:23:44.709 Min: 64 00:23:44.709 Completion Queue Entry Size 00:23:44.709 Max: 16 00:23:44.709 Min: 16 00:23:44.709 Number of Namespaces: 32 00:23:44.709 Compare Command: Supported 00:23:44.709 Write Uncorrectable Command: Not Supported 00:23:44.709 Dataset Management Command: Supported 00:23:44.709 Write Zeroes Command: Supported 00:23:44.709 Set Features Save Field: Not Supported 00:23:44.709 Reservations: Supported 00:23:44.709 Timestamp: Not Supported 00:23:44.709 Copy: Supported 00:23:44.709 Volatile Write Cache: Present 00:23:44.709 Atomic Write Unit (Normal): 1 00:23:44.709 Atomic Write Unit (PFail): 1 00:23:44.709 Atomic Compare & Write Unit: 1 00:23:44.709 Fused Compare & Write: Supported 00:23:44.709 Scatter-Gather List 00:23:44.709 SGL Command Set: Supported 00:23:44.709 SGL Keyed: Supported 00:23:44.709 SGL Bit Bucket Descriptor: Not Supported 00:23:44.709 SGL Metadata Pointer: Not Supported 00:23:44.709 Oversized SGL: Not Supported 00:23:44.709 SGL Metadata Address: Not Supported 00:23:44.709 SGL Offset: Supported 00:23:44.709 Transport SGL Data Block: Not Supported 00:23:44.709 Replay Protected Memory Block: Not Supported 00:23:44.709 00:23:44.709 Firmware Slot Information 00:23:44.709 ========================= 00:23:44.709 Active slot: 1 00:23:44.709 Slot 1 Firmware Revision: 25.01 00:23:44.709 00:23:44.709 00:23:44.709 Commands Supported and Effects 00:23:44.709 
============================== 00:23:44.709 Admin Commands 00:23:44.709 -------------- 00:23:44.709 Get Log Page (02h): Supported 00:23:44.709 Identify (06h): Supported 00:23:44.709 Abort (08h): Supported 00:23:44.709 Set Features (09h): Supported 00:23:44.709 Get Features (0Ah): Supported 00:23:44.709 Asynchronous Event Request (0Ch): Supported 00:23:44.709 Keep Alive (18h): Supported 00:23:44.709 I/O Commands 00:23:44.709 ------------ 00:23:44.709 Flush (00h): Supported LBA-Change 00:23:44.709 Write (01h): Supported LBA-Change 00:23:44.709 Read (02h): Supported 00:23:44.709 Compare (05h): Supported 00:23:44.709 Write Zeroes (08h): Supported LBA-Change 00:23:44.709 Dataset Management (09h): Supported LBA-Change 00:23:44.709 Copy (19h): Supported LBA-Change 00:23:44.709 00:23:44.709 Error Log 00:23:44.709 ========= 00:23:44.709 00:23:44.709 Arbitration 00:23:44.709 =========== 00:23:44.709 Arbitration Burst: 1 00:23:44.709 00:23:44.709 Power Management 00:23:44.709 ================ 00:23:44.709 Number of Power States: 1 00:23:44.709 Current Power State: Power State #0 00:23:44.709 Power State #0: 00:23:44.709 Max Power: 0.00 W 00:23:44.709 Non-Operational State: Operational 00:23:44.709 Entry Latency: Not Reported 00:23:44.709 Exit Latency: Not Reported 00:23:44.709 Relative Read Throughput: 0 00:23:44.709 Relative Read Latency: 0 00:23:44.710 Relative Write Throughput: 0 00:23:44.710 Relative Write Latency: 0 00:23:44.710 Idle Power: Not Reported 00:23:44.710 Active Power: Not Reported 00:23:44.710 Non-Operational Permissive Mode: Not Supported 00:23:44.710 00:23:44.710 Health Information 00:23:44.710 ================== 00:23:44.710 Critical Warnings: 00:23:44.710 Available Spare Space: OK 00:23:44.710 Temperature: OK 00:23:44.710 Device Reliability: OK 00:23:44.710 Read Only: No 00:23:44.710 Volatile Memory Backup: OK 00:23:44.710 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:44.710 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:44.710 Available Spare: 0% 00:23:44.710 Available Spare Threshold: 0% 00:23:44.710 Life Percentage Used:[2024-11-26 07:33:12.687658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.687662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.687668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.687679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441b80, cid 7, qid 0 00:23:44.710 [2024-11-26 07:33:12.687750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.687756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.687760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.687763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441b80) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.687790] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:44.710 [2024-11-26 07:33:12.687800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441100) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.687805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:44.710 [2024-11-26 07:33:12.687810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441280) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.687814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.710 [2024-11-26 07:33:12.687818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441400) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.687822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.710 [2024-11-26 07:33:12.687826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.687830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.710 [2024-11-26 07:33:12.687837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.687840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.687844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.687850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.687860] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.710 [2024-11-26 07:33:12.691954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.691963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.691966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.691969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.691975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.691979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.691982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.691988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.692004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.710 [2024-11-26 07:33:12.692208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.692214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.692222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.692230] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:44.710 [2024-11-26 07:33:12.692234] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:44.710 [2024-11-26 07:33:12.692243] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.692255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.692265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.710 [2024-11-26 07:33:12.692358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.692364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.692367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.692379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.692391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.692401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.710 [2024-11-26 07:33:12.692468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.692474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.692477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.692489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.692501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.692510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.710 [2024-11-26 07:33:12.692610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.692616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.692619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.692631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692637] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.692643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.692652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.710 [2024-11-26 07:33:12.692761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.692767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.692770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.692781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.692793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.692803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.710 [2024-11-26 07:33:12.692912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.692918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.692921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.692932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.692939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.692944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.692959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.710 [2024-11-26 07:33:12.693022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.710 [2024-11-26 07:33:12.693028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.710 [2024-11-26 07:33:12.693031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.693034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.710 [2024-11-26 07:33:12.693042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.693046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.710 [2024-11-26 07:33:12.693048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.710 [2024-11-26 07:33:12.693054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.710 [2024-11-26 07:33:12.693064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.711 [2024-11-26 07:33:12.693164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.711 [2024-11-26 07:33:12.693169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.711 [2024-11-26 07:33:12.693172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.711 [2024-11-26 07:33:12.693184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.711 [2024-11-26 07:33:12.693196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.711 [2024-11-26 07:33:12.693205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.711 [2024-11-26 07:33:12.693264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.711 [2024-11-26 07:33:12.693270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.711 [2024-11-26 07:33:12.693275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.711 [2024-11-26 07:33:12.693287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.711 [2024-11-26 07:33:12.693299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.711 [2024-11-26 07:33:12.693308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.711 [2024-11-26 07:33:12.693415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.711 [2024-11-26 07:33:12.693421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.711 [2024-11-26 07:33:12.693424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.711 [2024-11-26 07:33:12.693435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.711 [2024-11-26 07:33:12.693448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.711 [2024-11-26 07:33:12.693457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.711 [2024-11-26 
07:33:12.693526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.711 [2024-11-26 07:33:12.693531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.711 [2024-11-26 07:33:12.693534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.711 [2024-11-26 07:33:12.693547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.711 [2024-11-26 07:33:12.693559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.711 [2024-11-26 07:33:12.693569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.711 [2024-11-26 07:33:12.693669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.711 [2024-11-26 07:33:12.693675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.711 [2024-11-26 07:33:12.693678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.711 [2024-11-26 07:33:12.693689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.711 [2024-11-26 07:33:12.693701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.711 [2024-11-26 07:33:12.693710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.711 [2024-11-26 07:33:12.693819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.711 [2024-11-26 07:33:12.693825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.711 [2024-11-26 07:33:12.693828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.711 [2024-11-26 07:33:12.693841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.711 [2024-11-26 07:33:12.693853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.711 [2024-11-26 07:33:12.693862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.711 [2024-11-26 07:33:12.693970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.711 [2024-11-26 07:33:12.693976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.711 
[2024-11-26 07:33:12.693979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.711 [2024-11-26 07:33:12.693991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.711 [2024-11-26 07:33:12.693997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.711 [2024-11-26 07:33:12.694003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.711 [2024-11-26 07:33:12.694012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.711
[2024-11-26 07:33:12.695191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.712 [2024-11-26 07:33:12.695197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.712 [2024-11-26 07:33:12.695200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.695203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.712 [2024-11-26 07:33:12.695211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.695215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.695218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.712
[2024-11-26 07:33:12.695223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.712 [2024-11-26 07:33:12.695233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.712 [2024-11-26 07:33:12.695332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.712 [2024-11-26 07:33:12.695338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.712 [2024-11-26 07:33:12.695341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.695344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.712 [2024-11-26 07:33:12.695352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.695356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.695359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.712 [2024-11-26 07:33:12.695365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.712 [2024-11-26 07:33:12.695374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.712 [2024-11-26 07:33:12.698953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.712 [2024-11-26 07:33:12.698960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.712 [2024-11-26 07:33:12.698964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.698967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.712 [2024-11-26 07:33:12.698977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.698981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.698986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13df690) 00:23:44.712 [2024-11-26 07:33:12.698992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.712 [2024-11-26 07:33:12.699002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1441580, cid 3, qid 0 00:23:44.712 [2024-11-26 07:33:12.699186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:44.712 [2024-11-26 07:33:12.699192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:44.712 [2024-11-26 07:33:12.699195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:44.712 [2024-11-26 07:33:12.699198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1441580) on tqpair=0x13df690 00:23:44.712 [2024-11-26 07:33:12.699205] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:23:44.712 0% 00:23:44.712 Data Units Read: 0 00:23:44.712 Data Units Written: 0 00:23:44.712 Host Read Commands: 0 00:23:44.712 Host Write Commands: 0 00:23:44.712 Controller Busy Time: 0 minutes 00:23:44.712 Power Cycles: 0 00:23:44.712 Power On Hours: 0 hours 00:23:44.712 Unsafe Shutdowns: 0 00:23:44.712 Unrecoverable Media Errors: 0 00:23:44.712 Lifetime Error Log Entries: 0 00:23:44.712 
Warning Temperature Time: 0 minutes 00:23:44.712 Critical Temperature Time: 0 minutes 00:23:44.712 00:23:44.712 Number of Queues 00:23:44.712 ================ 00:23:44.712 Number of I/O Submission Queues: 127 00:23:44.712 Number of I/O Completion Queues: 127 00:23:44.712 00:23:44.712 Active Namespaces 00:23:44.712 ================= 00:23:44.712 Namespace ID:1 00:23:44.712 Error Recovery Timeout: Unlimited 00:23:44.712 Command Set Identifier: NVM (00h) 00:23:44.712 Deallocate: Supported 00:23:44.712 Deallocated/Unwritten Error: Not Supported 00:23:44.712 Deallocated Read Value: Unknown 00:23:44.712 Deallocate in Write Zeroes: Not Supported 00:23:44.712 Deallocated Guard Field: 0xFFFF 00:23:44.712 Flush: Supported 00:23:44.712 Reservation: Supported 00:23:44.712 Namespace Sharing Capabilities: Multiple Controllers 00:23:44.712 Size (in LBAs): 131072 (0GiB) 00:23:44.712 Capacity (in LBAs): 131072 (0GiB) 00:23:44.712 Utilization (in LBAs): 131072 (0GiB) 00:23:44.712 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:44.712 EUI64: ABCDEF0123456789 00:23:44.712 UUID: 94dd3680-f05c-4283-b29b-abd39731799f 00:23:44.712 Thin Provisioning: Not Supported 00:23:44.712 Per-NS Atomic Units: Yes 00:23:44.713 Atomic Boundary Size (Normal): 0 00:23:44.713 Atomic Boundary Size (PFail): 0 00:23:44.713 Atomic Boundary Offset: 0 00:23:44.713 Maximum Single Source Range Length: 65535 00:23:44.713 Maximum Copy Length: 65535 00:23:44.713 Maximum Source Range Count: 1 00:23:44.713 NGUID/EUI64 Never Reused: No 00:23:44.713 Namespace Write Protected: No 00:23:44.713 Number of LBA Formats: 1 00:23:44.713 Current LBA Format: LBA Format #00 00:23:44.713 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:44.713 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.713 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.713 rmmod nvme_tcp 00:23:44.713 rmmod nvme_fabrics 00:23:44.713 rmmod nvme_keyring 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 818425 ']' 00:23:44.973 07:33:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 818425 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 818425 ']' 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 818425 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 818425 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 818425' 00:23:44.973 killing process with pid 818425 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 818425 00:23:44.973 07:33:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 818425 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.973 07:33:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.511 00:23:47.511 real 0m8.553s 00:23:47.511 user 0m5.281s 00:23:47.511 sys 0m4.297s 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.511 ************************************ 00:23:47.511 END TEST nvmf_identify 00:23:47.511 ************************************ 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.511 
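Before the nvmf_perf banner below, note the teardown pattern that just closed nvmf_identify: nvmftestfini unloads the host-side NVMe/TCP modules, killprocess stops the nvmf_tgt reactor (pid 818425 in this run), and the networking set up for the test is rolled back. A condensed sketch of that sequence, built only from commands visible in the trace above (the pipeline form of iptr and the netns delete inside _remove_spdk_ns are assumptions; this is not the literal nvmf/common.sh source):

    modprobe -v -r nvme-tcp                                 # rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess: stop the target started for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: keep everything except the SPDK-tagged ACCEPT rule (pipeline form assumed)
    ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator-side interface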
************************************ 00:23:47.511 START TEST nvmf_perf 00:23:47.511 ************************************ 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:47.511 * Looking for test storage... 00:23:47.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:47.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.511 --rc genhtml_branch_coverage=1 00:23:47.511 --rc genhtml_function_coverage=1 00:23:47.511 --rc genhtml_legend=1 00:23:47.511 --rc geninfo_all_blocks=1 00:23:47.511 --rc geninfo_unexecuted_blocks=1 00:23:47.511 00:23:47.511 ' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:47.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.511 --rc genhtml_branch_coverage=1 00:23:47.511 --rc genhtml_function_coverage=1 00:23:47.511 --rc genhtml_legend=1 00:23:47.511 --rc geninfo_all_blocks=1 00:23:47.511 --rc geninfo_unexecuted_blocks=1 00:23:47.511 00:23:47.511 ' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:47.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.511 --rc genhtml_branch_coverage=1 00:23:47.511 --rc genhtml_function_coverage=1 00:23:47.511 --rc genhtml_legend=1 00:23:47.511 --rc geninfo_all_blocks=1 00:23:47.511 --rc geninfo_unexecuted_blocks=1 00:23:47.511 00:23:47.511 ' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:47.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.511 --rc genhtml_branch_coverage=1 00:23:47.511 --rc genhtml_function_coverage=1 00:23:47.511 --rc genhtml_legend=1 00:23:47.511 --rc geninfo_all_blocks=1 00:23:47.511 --rc geninfo_unexecuted_blocks=1 00:23:47.511 00:23:47.511 ' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.511 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.512 07:33:15 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.512 07:33:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:52.787 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.787 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:52.787 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:52.788 Found net devices under 0000:86:00.0: cvl_0_0 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.788 07:33:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:52.788 Found net devices under 0000:86:00.1: cvl_0_1 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.788 07:33:20 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:23:52.788 00:23:52.788 --- 10.0.0.2 ping statistics --- 00:23:52.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.788 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:23:52.788 00:23:52.788 --- 10.0.0.1 ping statistics --- 00:23:52.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.788 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=822150 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 822150 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 822150 ']' 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:52.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.788 07:33:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.788 [2024-11-26 07:33:20.837659] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:23:52.788 [2024-11-26 07:33:20.837700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.048 [2024-11-26 07:33:20.903803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.048 [2024-11-26 07:33:20.947124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.048 [2024-11-26 07:33:20.947162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.048 [2024-11-26 07:33:20.947170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.048 [2024-11-26 07:33:20.947179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.048 [2024-11-26 07:33:20.947200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.048 [2024-11-26 07:33:20.948807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.048 [2024-11-26 07:33:20.948920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.048 [2024-11-26 07:33:20.949009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.048 [2024-11-26 07:33:20.949011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.048 07:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.048 07:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:53.048 07:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.048 07:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.048 07:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 07:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.048 07:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:53.048 07:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:56.366 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:56.366 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:56.366 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:56.366 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:56.625 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
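At this point perf.sh has its bdev list: a 64 MiB, 512-byte-block malloc bdev (Malloc0) created just above, plus the local NVMe drive at 0000:5e:00.0 that gen_nvme.sh / load_subsystem_config registered and framework_get_config bdev | jq located. The trace that follows attaches both as namespaces of an NVMe-oF TCP subsystem listening on 10.0.0.2:4420. A minimal sketch of that rpc.py sequence, with the long /var/jenkins/workspace/.../scripts/rpc.py prefix shortened to rpc.py (the full invocations are in the trace itself; this is a recap, not the literal perf.sh source):

    rpc.py bdev_malloc_create 64 512                                            # -> Malloc0
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0             # NSID 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1             # NSID 2: the local drive at 0000:5e:00.0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs later in the log then target either the PCIe device directly (-r 'trtype:PCIe traddr:0000:5e:00.0') or this subsystem over TCP (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420').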
00:23:56.625 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:56.625 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:56.625 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:56.625 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:56.884 [2024-11-26 07:33:24.723061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.884 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:56.884 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:56.884 07:33:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.142 07:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:57.143 07:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:57.401 07:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.661 [2024-11-26 07:33:25.542206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.661 07:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:57.920 07:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:57.920 07:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:57.920 07:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:57.920 07:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:59.300 Initializing NVMe Controllers 00:23:59.300 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:59.300 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:59.300 Initialization complete. Launching workers. 
00:23:59.300 ======================================================== 00:23:59.300 Latency(us) 00:23:59.300 Device Information : IOPS MiB/s Average min max 00:23:59.300 PCIE (0000:5e:00.0) NSID 1 from core 0: 97731.50 381.76 326.80 10.54 4475.54 00:23:59.300 ======================================================== 00:23:59.300 Total : 97731.50 381.76 326.80 10.54 4475.54 00:23:59.300 00:23:59.300 07:33:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:00.680 Initializing NVMe Controllers 00:24:00.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:00.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:00.680 Initialization complete. Launching workers. 00:24:00.680 ======================================================== 00:24:00.680 Latency(us) 00:24:00.680 Device Information : IOPS MiB/s Average min max 00:24:00.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 12923.10 110.82 44980.44 00:24:00.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19239.43 3724.23 47902.25 00:24:00.680 ======================================================== 00:24:00.680 Total : 132.00 0.52 15459.20 110.82 47902.25 00:24:00.680 00:24:00.680 07:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:02.070 Initializing NVMe Controllers 00:24:02.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:02.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:02.070 Initialization complete. Launching workers. 00:24:02.070 ======================================================== 00:24:02.070 Latency(us) 00:24:02.070 Device Information : IOPS MiB/s Average min max 00:24:02.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10892.00 42.55 2937.34 453.99 6254.58 00:24:02.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3837.00 14.99 8393.84 7145.08 15919.97 00:24:02.070 ======================================================== 00:24:02.070 Total : 14729.00 57.54 4358.80 453.99 15919.97 00:24:02.070 00:24:02.070 07:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:02.070 07:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:02.070 07:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:04.610 Initializing NVMe Controllers 00:24:04.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.610 Controller IO queue size 128, less than required. 00:24:04.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:04.610 Controller IO queue size 128, less than required. 00:24:04.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.610 Initialization complete. Launching workers. 00:24:04.610 ======================================================== 00:24:04.610 Latency(us) 00:24:04.610 Device Information : IOPS MiB/s Average min max 00:24:04.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1748.59 437.15 74216.09 51371.34 116503.07 00:24:04.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.36 150.09 222338.78 72466.55 346425.03 00:24:04.610 ======================================================== 00:24:04.610 Total : 2348.95 587.24 112074.22 51371.34 346425.03 00:24:04.610 00:24:04.610 07:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:04.610 No valid NVMe controllers or AIO or URING devices found 00:24:04.610 Initializing NVMe Controllers 00:24:04.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.610 Controller IO queue size 128, less than required. 00:24:04.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.610 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:04.610 Controller IO queue size 128, less than required. 00:24:04.610 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.610 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:04.610 WARNING: Some requested NVMe devices were skipped 00:24:04.610 07:33:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:07.149 Initializing NVMe Controllers 00:24:07.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.149 Controller IO queue size 128, less than required. 00:24:07.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.149 Controller IO queue size 128, less than required. 00:24:07.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:07.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:07.149 Initialization complete. Launching workers. 
00:24:07.149 00:24:07.149 ==================== 00:24:07.149 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:07.149 TCP transport: 00:24:07.149 polls: 11685 00:24:07.149 idle_polls: 8181 00:24:07.149 sock_completions: 3504 00:24:07.149 nvme_completions: 6447 00:24:07.149 submitted_requests: 9700 00:24:07.149 queued_requests: 1 00:24:07.149 00:24:07.149 ==================== 00:24:07.149 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:07.149 TCP transport: 00:24:07.149 polls: 11322 00:24:07.149 idle_polls: 7575 00:24:07.149 sock_completions: 3747 00:24:07.149 nvme_completions: 6411 00:24:07.149 submitted_requests: 9508 00:24:07.149 queued_requests: 1 00:24:07.149 ======================================================== 00:24:07.149 Latency(us) 00:24:07.149 Device Information : IOPS MiB/s Average min max 00:24:07.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1611.31 402.83 82095.67 51676.70 158437.81 00:24:07.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1602.31 400.58 80301.25 46224.10 118223.83 00:24:07.149 ======================================================== 00:24:07.149 Total : 3213.63 803.41 81200.98 46224.10 158437.81 00:24:07.149 00:24:07.149 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:07.149 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.408 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:07.408 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:07.408 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:07.408 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:07.408 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:07.409 rmmod nvme_tcp 00:24:07.409 rmmod nvme_fabrics 00:24:07.409 rmmod nvme_keyring 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 822150 ']' 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 822150 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 822150 ']' 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 822150 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.409 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 822150 00:24:07.668 07:33:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.668 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.668 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 822150' 00:24:07.668 killing process with pid 822150 00:24:07.668 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 822150 00:24:07.668 07:33:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 822150 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.048 07:33:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:11.584 00:24:11.584 real 0m23.932s 00:24:11.584 user 1m3.617s 00:24:11.584 sys 0m8.017s 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:11.584 ************************************ 00:24:11.584 END TEST nvmf_perf 00:24:11.584 ************************************ 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.584 ************************************ 00:24:11.584 START TEST nvmf_fio_host 00:24:11.584 ************************************ 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:11.584 * Looking for test storage... 
00:24:11.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.584 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:11.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.585 --rc genhtml_branch_coverage=1 00:24:11.585 --rc genhtml_function_coverage=1 00:24:11.585 --rc genhtml_legend=1 00:24:11.585 --rc geninfo_all_blocks=1 00:24:11.585 --rc geninfo_unexecuted_blocks=1 00:24:11.585 00:24:11.585 ' 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:11.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.585 --rc genhtml_branch_coverage=1 00:24:11.585 --rc genhtml_function_coverage=1 00:24:11.585 --rc genhtml_legend=1 00:24:11.585 --rc geninfo_all_blocks=1 00:24:11.585 --rc geninfo_unexecuted_blocks=1 00:24:11.585 00:24:11.585 ' 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:11.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.585 --rc genhtml_branch_coverage=1 00:24:11.585 --rc genhtml_function_coverage=1 00:24:11.585 --rc genhtml_legend=1 00:24:11.585 --rc geninfo_all_blocks=1 00:24:11.585 --rc geninfo_unexecuted_blocks=1 00:24:11.585 00:24:11.585 ' 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:11.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.585 --rc genhtml_branch_coverage=1 00:24:11.585 --rc genhtml_function_coverage=1 00:24:11.585 --rc genhtml_legend=1 00:24:11.585 --rc geninfo_all_blocks=1 00:24:11.585 --rc geninfo_unexecuted_blocks=1 00:24:11.585 00:24:11.585 ' 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.585 07:33:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.585 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:11.586 
07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.586 07:33:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:16.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:16.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:16.867 Found net devices under 0000:86:00.0: cvl_0_0 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:16.867 Found net devices under 0000:86:00.1: cvl_0_1 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.867 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.868 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.868 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.868 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.868 07:33:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.127 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.127 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.127 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.127 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.127 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.127 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:24:17.128 00:24:17.128 --- 10.0.0.2 ping statistics --- 00:24:17.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.128 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:17.128 00:24:17.128 --- 10.0.0.1 ping statistics --- 00:24:17.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.128 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=828248 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 828248 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 828248 ']' 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.128 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.388 [2024-11-26 07:33:45.256537] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
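The nvmftestinit trace above reduces to a small amount of ip/iptables plumbing: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and an iptables rule (prefixed SPDK_NVMF so later cleanup can grep it back out) opens the NVMe/TCP listener port 4420 before reachability is checked in both directions. A minimal sketch of that sequence, assuming the same cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace used in this run:

# isolate the target-side port in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP listener port, tagging the rule with the SPDK_NVMF marker the teardown filters on
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: allow 4420'

# verify the path in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1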
00:24:17.388 [2024-11-26 07:33:45.256588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.388 [2024-11-26 07:33:45.324646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.388 [2024-11-26 07:33:45.366769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.388 [2024-11-26 07:33:45.366809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.388 [2024-11-26 07:33:45.366816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.388 [2024-11-26 07:33:45.366822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.388 [2024-11-26 07:33:45.366826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.388 [2024-11-26 07:33:45.368418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.388 [2024-11-26 07:33:45.368515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.388 [2024-11-26 07:33:45.368594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.388 [2024-11-26 07:33:45.368595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.388 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.388 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:17.388 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:17.647 [2024-11-26 07:33:45.641936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.647 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:17.647 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.647 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.647 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:17.906 Malloc1 00:24:17.906 07:33:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.166 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:18.425 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.684 [2024-11-26 07:33:46.524504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:18.684 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:18.952 07:33:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:19.211 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:19.211 fio-3.35 00:24:19.211 Starting 1 thread 00:24:21.747 [2024-11-26 07:33:49.378131] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105d3d0 is same with the state(6) to be set 00:24:21.747 [2024-11-26 07:33:49.378179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105d3d0 is same with the state(6) to be set 00:24:21.747 [2024-11-26 07:33:49.378188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105d3d0 is same with the state(6) to be set 00:24:21.747 [2024-11-26 07:33:49.378195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105d3d0 is same with the state(6) to be set 00:24:21.747 00:24:21.747 test: (groupid=0, jobs=1): err= 0: pid=828628: Tue Nov 26 07:33:49 2024 00:24:21.747 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.2MiB/2005msec) 00:24:21.747 slat (nsec): min=1589, max=242849, avg=1742.09, stdev=2218.99 00:24:21.747 clat (usec): min=3170, max=10922, avg=6090.95, stdev=443.15 00:24:21.747 lat (usec): min=3203, max=10924, avg=6092.69, stdev=443.06 00:24:21.747 clat percentiles (usec): 00:24:21.747 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:24:21.747 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:24:21.747 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:24:21.747 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8455], 99.95th=[ 9372], 00:24:21.747 | 99.99th=[10945] 00:24:21.747 bw ( KiB/s): min=45800, max=47064, per=99.95%, avg=46546.00, stdev=556.08, samples=4 00:24:21.747 iops : min=11450, max=11766, avg=11636.50, stdev=139.02, samples=4 00:24:21.747 write: IOPS=11.6k, BW=45.2MiB/s (47.3MB/s)(90.5MiB/2005msec); 0 zone resets 00:24:21.747 slat (nsec): min=1625, max=243672, avg=1813.37, stdev=1782.95 00:24:21.747 clat (usec): min=2463, max=9207, avg=4904.90, stdev=370.46 00:24:21.747 lat (usec): min=2479, max=9208, avg=4906.71, stdev=370.48 00:24:21.747 clat percentiles (usec): 00:24:21.747 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:24:21.747 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 5014], 00:24:21.747 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:24:21.747 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 7177], 99.95th=[ 8094], 00:24:21.747 | 99.99th=[ 8586] 00:24:21.747 bw ( KiB/s): min=45824, max=46592, per=100.00%, avg=46242.00, stdev=316.00, samples=4 00:24:21.747 iops : min=11456, max=11648, avg=11560.50, stdev=79.00, samples=4 00:24:21.747 lat (msec) : 4=0.41%, 10=99.57%, 20=0.02% 00:24:21.747 cpu : usr=72.21%, sys=26.60%, ctx=56, majf=0, minf=3 00:24:21.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:21.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:21.747 issued rwts: total=23344,23177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:21.747 00:24:21.747 Run status group 0 (all jobs): 00:24:21.747 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.2MiB (95.6MB), run=2005-2005msec 00:24:21.747 WRITE: bw=45.2MiB/s (47.3MB/s), 45.2MiB/s-45.2MiB/s (47.3MB/s-47.3MB/s), io=90.5MiB (94.9MB), run=2005-2005msec 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:21.747 
07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:21.747 07:33:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:21.747 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:21.747 fio-3.35 00:24:21.747 Starting 1 thread 00:24:24.282 00:24:24.282 test: (groupid=0, jobs=1): err= 0: pid=829202: Tue Nov 26 07:33:52 2024 00:24:24.282 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(341MiB/2007msec) 00:24:24.282 slat (nsec): min=2538, max=88509, avg=2856.34, stdev=1331.35 00:24:24.282 clat (usec): min=1630, max=13298, avg=6746.52, stdev=1473.30 00:24:24.282 lat (usec): min=1633, max=13312, avg=6749.38, stdev=1473.45 
00:24:24.282 clat percentiles (usec): 00:24:24.282 | 1.00th=[ 3752], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5407], 00:24:24.282 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7308], 00:24:24.282 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9110], 00:24:24.282 | 99.00th=[10290], 99.50th=[10683], 99.90th=[11600], 99.95th=[12780], 00:24:24.282 | 99.99th=[13304] 00:24:24.282 bw ( KiB/s): min=76608, max=104271, per=50.15%, avg=87203.75, stdev=11924.74, samples=4 00:24:24.282 iops : min= 4788, max= 6516, avg=5450.00, stdev=744.85, samples=4 00:24:24.282 write: IOPS=6584, BW=103MiB/s (108MB/s)(178MiB/1730msec); 0 zone resets 00:24:24.282 slat (usec): min=30, max=382, avg=32.08, stdev= 7.61 00:24:24.282 clat (usec): min=4917, max=15396, avg=8820.99, stdev=1530.14 00:24:24.282 lat (usec): min=4948, max=15428, avg=8853.07, stdev=1531.59 00:24:24.282 clat percentiles (usec): 00:24:24.282 | 1.00th=[ 6063], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7570], 00:24:24.282 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:24:24.282 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10945], 95.00th=[11600], 00:24:24.282 | 99.00th=[12780], 99.50th=[13829], 99.90th=[15008], 99.95th=[15139], 00:24:24.282 | 99.99th=[15270] 00:24:24.282 bw ( KiB/s): min=79872, max=108295, per=86.05%, avg=90649.75, stdev=12301.44, samples=4 00:24:24.282 iops : min= 4992, max= 6768, avg=5665.50, stdev=768.63, samples=4 00:24:24.282 lat (msec) : 2=0.03%, 4=1.40%, 10=90.23%, 20=8.34% 00:24:24.282 cpu : usr=86.19%, sys=13.01%, ctx=42, majf=0, minf=4 00:24:24.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:24.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:24.282 issued rwts: total=21811,11391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:24.282 00:24:24.282 Run status group 0 (all jobs): 00:24:24.282 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=341MiB (357MB), run=2007-2007msec 00:24:24.282 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=178MiB (187MB), run=1730-1730msec 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.282 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.282 rmmod nvme_tcp 00:24:24.542 rmmod nvme_fabrics 00:24:24.542 rmmod nvme_keyring 
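Both fio passes above drive the target through SPDK's userspace NVMe initiator rather than a kernel block device: stock fio is launched with the spdk_nvme ioengine plugin LD_PRELOADed, and the subsystem is selected with a transport-ID style --filename instead of a /dev node. A minimal sketch of that invocation pattern, assuming the plugin and job-file paths from this workspace (the second pass swaps in mock_sgl_config.fio the same way):

PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
JOB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio

# the transport ID replaces a device path; ns=1 selects the first namespace of the attached controller
LD_PRELOAD=$PLUGIN /usr/src/fio/fio "$JOB" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096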
00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 828248 ']' 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 828248 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 828248 ']' 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 828248 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 828248 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 828248' 00:24:24.542 killing process with pid 828248 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 828248 00:24:24.542 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 828248 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.802 07:33:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.709 07:33:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.709 00:24:26.709 real 0m15.530s 00:24:26.709 user 0m46.168s 00:24:26.709 sys 0m6.413s 00:24:26.709 07:33:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.709 07:33:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.709 ************************************ 00:24:26.709 END TEST nvmf_fio_host 00:24:26.709 ************************************ 00:24:26.709 07:33:54 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:26.709 07:33:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.709 07:33:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.709 07:33:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.709 ************************************ 00:24:26.709 START TEST nvmf_failover 00:24:26.709 ************************************ 00:24:26.709 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:26.969 * Looking for test storage... 00:24:26.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.969 --rc genhtml_branch_coverage=1 00:24:26.969 --rc genhtml_function_coverage=1 00:24:26.969 --rc genhtml_legend=1 00:24:26.969 --rc geninfo_all_blocks=1 00:24:26.969 --rc geninfo_unexecuted_blocks=1 00:24:26.969 00:24:26.969 ' 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.969 --rc genhtml_branch_coverage=1 00:24:26.969 --rc genhtml_function_coverage=1 00:24:26.969 --rc genhtml_legend=1 00:24:26.969 --rc geninfo_all_blocks=1 00:24:26.969 --rc geninfo_unexecuted_blocks=1 00:24:26.969 00:24:26.969 ' 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.969 --rc genhtml_branch_coverage=1 00:24:26.969 --rc genhtml_function_coverage=1 00:24:26.969 --rc genhtml_legend=1 00:24:26.969 --rc geninfo_all_blocks=1 00:24:26.969 --rc geninfo_unexecuted_blocks=1 00:24:26.969 00:24:26.969 ' 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.969 --rc genhtml_branch_coverage=1 00:24:26.969 --rc genhtml_function_coverage=1 00:24:26.969 --rc genhtml_legend=1 00:24:26.969 --rc geninfo_all_blocks=1 00:24:26.969 --rc geninfo_unexecuted_blocks=1 00:24:26.969 00:24:26.969 ' 00:24:26.969 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
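For reference, the fixture that failover.sh inherits from nvmf/common.sh at this point boils down to roughly the following values (copied from the trace above; the comments are a condensed reading aid rather than harness output, and the MB unit for the malloc bdev is an assumption based on SPDK's bdev_malloc_create convention):

NVMF_PORT=4420                          # first TCP listener
NVMF_SECOND_PORT=4421                   # second listener, used as the failover target
NVMF_THIRD_PORT=4422                    # third listener, used for the later failover leg
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn # default subsystem NQN from common.sh
NVME_HOSTNQN=$(nvme gen-hostnqn)        # a fresh host NQN is generated per run
MALLOC_BDEV_SIZE=64                     # size of the backing malloc bdev (MB, assumed)
MALLOC_BLOCK_SIZE=512                   # block size of the backing malloc bdev
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py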
00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.970 07:33:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:32.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:32.249 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:32.249 Found net devices under 0000:86:00.0: cvl_0_0 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:32.249 Found net devices under 0000:86:00.1: cvl_0_1 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.249 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:24:32.250 00:24:32.250 --- 10.0.0.2 ping statistics --- 00:24:32.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.250 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:24:32.250 00:24:32.250 --- 10.0.0.1 ping statistics --- 00:24:32.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.250 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=832952 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 832952 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 832952 ']' 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.250 07:33:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.250 [2024-11-26 07:33:59.986501] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:24:32.250 [2024-11-26 07:33:59.986548] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.250 [2024-11-26 07:34:00.054537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:32.250 [2024-11-26 07:34:00.099374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:32.250 [2024-11-26 07:34:00.099411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.250 [2024-11-26 07:34:00.099418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.250 [2024-11-26 07:34:00.099425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.250 [2024-11-26 07:34:00.099431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.250 [2024-11-26 07:34:00.100868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.250 [2024-11-26 07:34:00.100964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.250 [2024-11-26 07:34:00.100969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.250 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.250 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:32.250 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.250 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.250 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.250 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.250 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:32.509 [2024-11-26 07:34:00.413756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.509 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:32.768 Malloc0 00:24:32.768 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.768 07:34:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.027 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.286 [2024-11-26 07:34:01.206565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.286 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.545 [2024-11-26 07:34:01.399084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.545 [2024-11-26 07:34:01.595713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=833286 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 833286 /var/tmp/bdevperf.sock 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 833286 ']' 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.545 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:33.804 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.804 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:33.804 07:34:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:34.372 NVMe0n1 00:24:34.372 07:34:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:34.631 00:24:34.631 07:34:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.631 07:34:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=833433 00:24:34.631 07:34:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:35.566 07:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.824 07:34:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:39.104 07:34:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:39.104 00:24:39.104 07:34:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.363 [2024-11-26 07:34:07.264331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 [2024-11-26 07:34:07.264382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 [2024-11-26 07:34:07.264390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 [2024-11-26 07:34:07.264397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 [2024-11-26 07:34:07.264409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 [2024-11-26 07:34:07.264415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 [2024-11-26 07:34:07.264421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 [2024-11-26 07:34:07.264427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 [2024-11-26 07:34:07.264433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483060 is same with the state(6) to be set 00:24:39.363 07:34:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:42.661 07:34:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.661 [2024-11-26 07:34:10.484466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.661 07:34:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:43.598 07:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:43.858 [2024-11-26 07:34:11.699007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 
00:24:43.858 [2024-11-26 07:34:11.699086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.858 [2024-11-26 07:34:11.699115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699348] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 [2024-11-26 07:34:11.699354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2483e30 is same with the state(6) to be set 00:24:43.859 07:34:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 833433 00:24:50.444 { 00:24:50.444 "results": [ 00:24:50.444 { 00:24:50.444 "job": "NVMe0n1", 00:24:50.444 "core_mask": "0x1", 00:24:50.444 "workload": "verify", 00:24:50.444 "status": "finished", 00:24:50.444 "verify_range": { 00:24:50.444 "start": 0, 00:24:50.444 "length": 16384 00:24:50.444 }, 00:24:50.444 "queue_depth": 128, 00:24:50.444 "io_size": 4096, 00:24:50.444 "runtime": 15.012683, 00:24:50.444 "iops": 10822.649089439908, 00:24:50.444 "mibps": 42.27597300562464, 00:24:50.444 "io_failed": 16341, 00:24:50.444 "io_timeout": 0, 00:24:50.444 "avg_latency_us": 10724.209305259124, 00:24:50.444 "min_latency_us": 425.62782608695653, 00:24:50.444 "max_latency_us": 21883.325217391306 00:24:50.444 } 00:24:50.444 ], 00:24:50.444 "core_count": 1 00:24:50.444 } 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 833286 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 833286 ']' 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 833286 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833286 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833286' 00:24:50.444 killing process with pid 833286 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 833286 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 833286 00:24:50.444 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:50.444 [2024-11-26 07:34:01.668567] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:24:50.444 [2024-11-26 07:34:01.668624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833286 ] 00:24:50.444 [2024-11-26 07:34:01.734171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.444 [2024-11-26 07:34:01.777586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.444 Running I/O for 15 seconds... 
00:24:50.444 10979.00 IOPS, 42.89 MiB/s [2024-11-26T06:34:18.544Z] [2024-11-26 07:34:03.745125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.444 [2024-11-26 07:34:03.745164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.444 [2024-11-26 07:34:03.745180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.444 [2024-11-26 07:34:03.745188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.444 [2024-11-26 07:34:03.745197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.444 [2024-11-26 07:34:03.745204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.444 [2024-11-26 07:34:03.745213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.444 [2024-11-26 07:34:03.745219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.444 [2024-11-26 07:34:03.745227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.444 [2024-11-26 07:34:03.745234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.444 [2024-11-26 07:34:03.745242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.444 [2024-11-26 07:34:03.745249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.444 [2024-11-26 07:34:03.745257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:50.445 [2024-11-26 07:34:03.745315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745471] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745620] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97632 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.445 [2024-11-26 07:34:03.745843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.445 [2024-11-26 07:34:03.745849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.745863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.745878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.745892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.745912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 
[2024-11-26 07:34:03.745927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.745942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.745963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.745978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.745986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.745992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.446 [2024-11-26 07:34:03.746007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:50.446 [2024-11-26 07:34:03.746382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.446 [2024-11-26 07:34:03.746418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.446 [2024-11-26 07:34:03.746432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.446 [2024-11-26 07:34:03.746441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 
07:34:03.746529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.447 [2024-11-26 07:34:03.746655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:115 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.746991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.447 [2024-11-26 07:34:03.746997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.747017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.447 [2024-11-26 07:34:03.747024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98160 len:8 PRP1 0x0 PRP2 0x0 00:24:50.447 [2024-11-26 07:34:03.747031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.447 [2024-11-26 07:34:03.747041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.447 [2024-11-26 07:34:03.747048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.448 [2024-11-26 07:34:03.747054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98168 len:8 PRP1 0x0 PRP2 0x0 00:24:50.448 [2024-11-26 07:34:03.747060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:03.747067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.448 [2024-11-26 07:34:03.747072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.448 [2024-11-26 07:34:03.747078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98176 len:8 PRP1 0x0 PRP2 0x0 00:24:50.448 [2024-11-26 07:34:03.747084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:03.747091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.448 [2024-11-26 07:34:03.747097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.448 [2024-11-26 07:34:03.747103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98184 len:8 PRP1 0x0 PRP2 0x0 00:24:50.448 [2024-11-26 07:34:03.747110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:03.747116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.448 [2024-11-26 07:34:03.747121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.448 [2024-11-26 07:34:03.747126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98192 len:8 PRP1 0x0 PRP2 0x0 00:24:50.448 [2024-11-26 07:34:03.747133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:03.747175] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:50.448 [2024-11-26 07:34:03.747197] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.448 [2024-11-26 07:34:03.747206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:03.747215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.448 [2024-11-26 07:34:03.747221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:03.747229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.448 [2024-11-26 07:34:03.747235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:03.747243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.448 [2024-11-26 07:34:03.747250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:03.747263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:50.448 [2024-11-26 07:34:03.750090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:50.448 [2024-11-26 07:34:03.750119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9b360 (9): Bad file descriptor 00:24:50.448 [2024-11-26 07:34:03.906488] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:24:50.448 10232.00 IOPS, 39.97 MiB/s [2024-11-26T06:34:18.548Z] 10543.67 IOPS, 41.19 MiB/s [2024-11-26T06:34:18.548Z] 10711.50 IOPS, 41.84 MiB/s [2024-11-26T06:34:18.548Z] [2024-11-26 07:34:07.266726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.448 [2024-11-26 07:34:07.266923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.266938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.266965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.266981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.266989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.266995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267070] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.448 [2024-11-26 07:34:07.267151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.448 [2024-11-26 07:34:07.267158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 
[2024-11-26 07:34:07.267376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.449 [2024-11-26 07:34:07.267644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.449 [2024-11-26 07:34:07.267667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64696 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:50.450 [2024-11-26 07:34:07.267690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64704 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64720 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64728 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64736 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64744 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267836] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64752 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64760 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64768 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64776 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64784 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64792 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.267980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.267986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.267991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.267997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64800 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64808 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64816 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64824 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64832 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64840 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 
07:34:07.268137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64848 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64856 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64864 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64872 len:8 PRP1 0x0 PRP2 0x0 00:24:50.450 [2024-11-26 07:34:07.268220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.450 [2024-11-26 07:34:07.268227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.450 [2024-11-26 07:34:07.268232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.450 [2024-11-26 07:34:07.268237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64880 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64888 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268280] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64896 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64904 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64912 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64920 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64928 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64936 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64944 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64952 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64960 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64968 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64976 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64984 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 
07:34:07.268577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64992 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65000 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65008 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65016 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65024 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65032 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65040 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65048 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65056 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.451 [2024-11-26 07:34:07.268784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.451 [2024-11-26 07:34:07.268788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.451 [2024-11-26 07:34:07.268794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65064 len:8 PRP1 0x0 PRP2 0x0 00:24:50.451 [2024-11-26 07:34:07.268800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.268807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.268812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.268818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65072 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.268824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.268830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.268836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.268841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65080 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.268848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.268859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:65088 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65096 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65104 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65112 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65120 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65128 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65136 len:8 PRP1 0x0 PRP2 0x0 
00:24:50.452 [2024-11-26 07:34:07.279656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65144 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65152 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65160 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65168 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65176 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65184 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279797] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65192 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65200 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65208 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65216 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65224 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65232 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.279968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.452 [2024-11-26 07:34:07.279973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.452 [2024-11-26 07:34:07.279979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65240 len:8 PRP1 0x0 PRP2 0x0 00:24:50.452 [2024-11-26 07:34:07.279986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.280028] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:50.452 [2024-11-26 07:34:07.280053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.452 [2024-11-26 07:34:07.280061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.452 [2024-11-26 07:34:07.280069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.453 [2024-11-26 07:34:07.280076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:07.280083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.453 [2024-11-26 07:34:07.280089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:07.280097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.453 [2024-11-26 07:34:07.280104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:07.280111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:50.453 [2024-11-26 07:34:07.280145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9b360 (9): Bad file descriptor 00:24:50.453 [2024-11-26 07:34:07.283033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:50.453 [2024-11-26 07:34:07.440646] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:24:50.453 10420.00 IOPS, 40.70 MiB/s [2024-11-26T06:34:18.553Z] 10549.83 IOPS, 41.21 MiB/s [2024-11-26T06:34:18.553Z] 10626.43 IOPS, 41.51 MiB/s [2024-11-26T06:34:18.553Z] 10697.12 IOPS, 41.79 MiB/s [2024-11-26T06:34:18.553Z] 10747.00 IOPS, 41.98 MiB/s [2024-11-26T06:34:18.553Z] [2024-11-26 07:34:11.699853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.699885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.699901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.699909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.699918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.699925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.699933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.699940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.699955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.699962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.699970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.699977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.699986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.699993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:65 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109656 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.453 [2024-11-26 07:34:11.700255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.453 [2024-11-26 07:34:11.700322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.453 [2024-11-26 07:34:11.700328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.454 [2024-11-26 07:34:11.700336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:24:50.454 [2024-11-26 07:34:11.700343 - 07:34:11.712555] nvme_qpair.c: a long run of near-identical *NOTICE*/*ERROR* records: nvme_io_qpair_print_command prints each outstanding READ/WRITE on sqid:1 (lba 109736 through 110520, len:8), nvme_qpair_manual_complete_request reports "Command completed manually", nvme_qpair_abort_queued_reqs reports "aborting queued i/o", and spdk_nvme_print_completion completes every command with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.457 [2024-11-26 07:34:11.712604] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[2024-11-26 07:34:11.712637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:50.457 [2024-11-26 07:34:11.712648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.457 [2024-11-26 07:34:11.712658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:50.457 [2024-11-26 07:34:11.712667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.457 [2024-11-26 07:34:11.712677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:50.457 [2024-11-26 07:34:11.712685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.457 [2024-11-26 07:34:11.712695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:50.457 [2024-11-26 07:34:11.712704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.457 [2024-11-26 07:34:11.712713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:50.457 [2024-11-26 07:34:11.712752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9b360 (9): Bad file descriptor
00:24:50.457 [2024-11-26 07:34:11.716632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:50.457 [2024-11-26 07:34:11.745975] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
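The abort/reset sequence above is the expected effect of host/failover.sh removing the path that bdevperf is actively using: every command queued on the deleted submission queue completes as ABORTED - SQ DELETION, the bdev_nvme layer fails over to the next registered trid, and the controller reset then succeeds on the surviving portal. A minimal sketch of the RPC sequence, reconstructed from the commands visible elsewhere in this log (the $SPDK_DIR prefix is illustrative shorthand for the absolute workspace path; 4421/4422 are attached the same way as 4420):

  # expose two extra portals on the subsystem so failover has somewhere to go
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attach the paths to the bdevperf instance with an explicit failover multipath policy
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # drop the active path mid-I/O; queued requests are aborted and NVMe0 reconnects on another portal
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1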
00:24:50.457 10719.20 IOPS, 41.87 MiB/s
[2024-11-26T06:34:18.557Z] 10766.00 IOPS, 42.05 MiB/s
[2024-11-26T06:34:18.557Z] 10795.08 IOPS, 42.17 MiB/s
[2024-11-26T06:34:18.557Z] 10790.77 IOPS, 42.15 MiB/s
[2024-11-26T06:34:18.557Z] 10811.79 IOPS, 42.23 MiB/s
[2024-11-26T06:34:18.557Z] 10823.47 IOPS, 42.28 MiB/s
00:24:50.457 Latency(us)
00:24:50.457 [2024-11-26T06:34:18.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:50.457 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:50.457 Verification LBA range: start 0x0 length 0x4000
00:24:50.457 NVMe0n1 : 15.01 10822.65 42.28 1088.48 0.00 10724.21 425.63 21883.33
00:24:50.457 [2024-11-26T06:34:18.557Z] ===================================================================================================================
00:24:50.457 [2024-11-26T06:34:18.557Z] Total : 10822.65 42.28 1088.48 0.00 10724.21 425.63 21883.33
00:24:50.457 Received shutdown signal, test time was about 15.000000 seconds
00:24:50.457
00:24:50.457 Latency(us)
00:24:50.457 [2024-11-26T06:34:18.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:50.457 [2024-11-26T06:34:18.557Z] ===================================================================================================================
00:24:50.457 [2024-11-26T06:34:18.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=835955
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 835955 /var/tmp/bdevperf.sock
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 835955 ']'
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
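Here the script starts a second bdevperf instance in RPC-driven mode: the -z flag makes bdevperf idle on the /var/tmp/bdevperf.sock socket instead of running immediately, so the test can attach the NVMe-oF paths first and trigger I/O on demand. A minimal sketch of that flow, using the commands that appear in this log (paths are shortened relative to the SPDK tree; the backgrounding with & and $! is illustrative, the script itself uses the waitforlisten helper to capture the pid):

  # start bdevperf idle, exposing an RPC socket instead of running a workload right away
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # attach the NVMe-oF path(s) through that socket, then kick off the actual run
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests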
00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.457 07:34:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:50.457 07:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.457 07:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:50.457 07:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:50.457 [2024-11-26 07:34:18.354872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:50.457 07:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:50.716 [2024-11-26 07:34:18.559488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:50.716 07:34:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:50.975 NVMe0n1 00:24:50.975 07:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:51.235 00:24:51.235 07:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:51.495 00:24:51.495 07:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.495 07:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:51.755 07:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:52.013 07:34:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:55.302 07:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:55.302 07:34:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:55.302 07:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:55.302 07:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=836873 00:24:55.302 07:34:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 836873 00:24:56.239 { 00:24:56.239 "results": [ 00:24:56.239 { 00:24:56.239 "job": "NVMe0n1", 00:24:56.239 "core_mask": "0x1", 00:24:56.239 
"workload": "verify", 00:24:56.239 "status": "finished", 00:24:56.239 "verify_range": { 00:24:56.239 "start": 0, 00:24:56.239 "length": 16384 00:24:56.239 }, 00:24:56.239 "queue_depth": 128, 00:24:56.239 "io_size": 4096, 00:24:56.239 "runtime": 1.006964, 00:24:56.239 "iops": 11169.217568850525, 00:24:56.239 "mibps": 43.62975612832236, 00:24:56.239 "io_failed": 0, 00:24:56.239 "io_timeout": 0, 00:24:56.239 "avg_latency_us": 11408.265952273265, 00:24:56.239 "min_latency_us": 2364.9947826086955, 00:24:56.239 "max_latency_us": 15272.737391304348 00:24:56.239 } 00:24:56.239 ], 00:24:56.239 "core_count": 1 00:24:56.239 } 00:24:56.239 07:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:56.239 [2024-11-26 07:34:17.983651] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:24:56.239 [2024-11-26 07:34:17.983705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835955 ] 00:24:56.239 [2024-11-26 07:34:18.047032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.239 [2024-11-26 07:34:18.085175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.239 [2024-11-26 07:34:19.941723] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:56.239 [2024-11-26 07:34:19.941769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.239 [2024-11-26 07:34:19.941781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.239 [2024-11-26 07:34:19.941789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.239 [2024-11-26 07:34:19.941797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.239 [2024-11-26 07:34:19.941804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.239 [2024-11-26 07:34:19.941811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.239 [2024-11-26 07:34:19.941818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.239 [2024-11-26 07:34:19.941825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.239 [2024-11-26 07:34:19.941832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:24:56.239 [2024-11-26 07:34:19.941857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:56.239 [2024-11-26 07:34:19.941871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e5360 (9): Bad file descriptor 00:24:56.239 [2024-11-26 07:34:19.991007] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:56.239 Running I/O for 1 seconds... 00:24:56.239 11119.00 IOPS, 43.43 MiB/s 00:24:56.239 Latency(us) 00:24:56.239 [2024-11-26T06:34:24.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.239 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:56.239 Verification LBA range: start 0x0 length 0x4000 00:24:56.239 NVMe0n1 : 1.01 11169.22 43.63 0.00 0.00 11408.27 2364.99 15272.74 00:24:56.239 [2024-11-26T06:34:24.339Z] =================================================================================================================== 00:24:56.239 [2024-11-26T06:34:24.339Z] Total : 11169.22 43.63 0.00 0.00 11408.27 2364.99 15272.74 00:24:56.240 07:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:56.240 07:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:56.499 07:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.757 07:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:56.757 07:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:57.016 07:34:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.275 07:34:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 835955 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 835955 ']' 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 835955 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 835955 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 835955' 00:25:00.565 killing process with pid 835955 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 835955 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 835955 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:00.565 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.825 rmmod nvme_tcp 00:25:00.825 rmmod nvme_fabrics 00:25:00.825 rmmod nvme_keyring 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 832952 ']' 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 832952 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 832952 ']' 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 832952 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 832952 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 832952' 00:25:00.825 killing process with pid 832952 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 832952 00:25:00.825 07:34:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 832952 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
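The teardown traced above unloads the kernel initiator modules and stops the tracked SPDK processes (bdevperf pid 835955, then the nvmf target pid 832952); the iptables restore and namespace cleanup that follow belong to the same nvmftestfini helper. A rough sketch of what the killprocess helper appears to do, reconstructed only from the xtrace above (the real common/autotest_common.sh version has additional retry and sudo handling that is omitted here):

  # unload the kernel NVMe/TCP initiator modules pulled in during the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop a tracked SPDK process by pid, skipping the special-case sudo wrapper path
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          if [ "$process_name" != sudo ]; then
              echo "killing process with pid $pid"
              kill "$pid"
              wait "$pid"
          fi
      fi
  }
  killprocess 832952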
00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.084 07:34:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.628 00:25:03.628 real 0m36.327s 00:25:03.628 user 1m57.747s 00:25:03.628 sys 0m7.089s 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:03.628 ************************************ 00:25:03.628 END TEST nvmf_failover 00:25:03.628 ************************************ 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.628 ************************************ 00:25:03.628 START TEST nvmf_host_discovery 00:25:03.628 ************************************ 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:03.628 * Looking for test storage... 
00:25:03.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.628 --rc genhtml_branch_coverage=1 00:25:03.628 --rc genhtml_function_coverage=1 00:25:03.628 --rc genhtml_legend=1 00:25:03.628 --rc geninfo_all_blocks=1 00:25:03.628 --rc geninfo_unexecuted_blocks=1 00:25:03.628 00:25:03.628 ' 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.628 --rc genhtml_branch_coverage=1 00:25:03.628 --rc genhtml_function_coverage=1 00:25:03.628 --rc genhtml_legend=1 00:25:03.628 --rc geninfo_all_blocks=1 00:25:03.628 --rc geninfo_unexecuted_blocks=1 00:25:03.628 00:25:03.628 ' 00:25:03.628 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.628 --rc genhtml_branch_coverage=1 00:25:03.628 --rc genhtml_function_coverage=1 00:25:03.628 --rc genhtml_legend=1 00:25:03.628 --rc geninfo_all_blocks=1 00:25:03.628 --rc geninfo_unexecuted_blocks=1 00:25:03.629 00:25:03.629 ' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.629 --rc genhtml_branch_coverage=1 00:25:03.629 --rc genhtml_function_coverage=1 00:25:03.629 --rc genhtml_legend=1 00:25:03.629 --rc geninfo_all_blocks=1 00:25:03.629 --rc geninfo_unexecuted_blocks=1 00:25:03.629 00:25:03.629 ' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:03.629 07:34:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.629 07:34:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:08.905 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:08.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.905 07:34:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.905 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:08.906 Found net devices under 0000:86:00.0: cvl_0_0 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:08.906 Found net devices under 0000:86:00.1: cvl_0_1 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.906 
07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:25:08.906 00:25:08.906 --- 10.0.0.2 ping statistics --- 00:25:08.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.906 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:25:08.906 00:25:08.906 --- 10.0.0.1 ping statistics --- 00:25:08.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.906 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=841097 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 841097 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 841097 ']' 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.906 [2024-11-26 07:34:36.430346] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:25:08.906 [2024-11-26 07:34:36.430389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.906 [2024-11-26 07:34:36.491142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.906 [2024-11-26 07:34:36.532810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.906 [2024-11-26 07:34:36.532845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.906 [2024-11-26 07:34:36.532851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.906 [2024-11-26 07:34:36.532858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.906 [2024-11-26 07:34:36.532863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.906 [2024-11-26 07:34:36.533456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.906 [2024-11-26 07:34:36.665413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.906 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.907 [2024-11-26 07:34:36.673579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.907 null0 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.907 null1 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=841125 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 841125 /tmp/host.sock 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 841125 ']' 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:08.907 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.907 [2024-11-26 07:34:36.751216] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:25:08.907 [2024-11-26 07:34:36.751257] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841125 ] 00:25:08.907 [2024-11-26 07:34:36.811685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.907 [2024-11-26 07:34:36.853053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.907 07:34:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.166 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.166 [2024-11-26 07:34:37.259107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:09.424 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:09.425 07:34:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:09.992 [2024-11-26 07:34:38.013128] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:09.992 [2024-11-26 07:34:38.013146] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:09.992 [2024-11-26 07:34:38.013159] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.251 
[2024-11-26 07:34:38.099426] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:10.251 [2024-11-26 07:34:38.322560] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:10.251 [2024-11-26 07:34:38.323357] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x99bdd0:1 started. 00:25:10.251 [2024-11-26 07:34:38.324731] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:10.251 [2024-11-26 07:34:38.324748] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:10.251 [2024-11-26 07:34:38.330888] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x99bdd0 was disconnected and freed. delete nvme_qpair. 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.510 07:34:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:10.510 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.511 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.770 [2024-11-26 07:34:38.644778] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x99c1a0:1 started. 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.770 [2024-11-26 07:34:38.651624] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x99c1a0 was disconnected and freed. delete nvme_qpair. 
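At this point the trace has walked through the core of the discovery flow: nvmf_tcp_init moves cvl_0_0 into the cvl_0_0_ns_spdk namespace as 10.0.0.2 and leaves cvl_0_1 as 10.0.0.1, one nvmf_tgt runs in that namespace as the target on the default /var/tmp/spdk.sock RPC socket, a second nvmf_tgt on /tmp/host.sock acts as the discovery host, and rpc_cmd drives both. A condensed sketch of that RPC sequence, grouped by socket rather than in the script's exact order and with the waitforcondition polling and notify_get_notifications checks omitted (the RPC variable and the direct use of scripts/rpc.py instead of the rpc_cmd wrapper are illustrative assumptions, not the script's own code):

# Target-side setup over the default RPC socket of the nvmf_tgt running in cvl_0_0_ns_spdk:
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

# Host-side checks against the second nvmf_tgt on /tmp/host.sock:
$RPC -s /tmp/host.sock log_set_flag bdev_nvme
$RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
$RPC -s /tmp/host.sock bdev_nvme_get_controllers    # the test waits for "nvme0" here
$RPC -s /tmp/host.sock bdev_get_bdevs               # and for "nvme0n1 nvme0n2" once both namespaces attach

Running the host side as a second SPDK application is what lets the test observe the discovery controller, the AER-driven log page fetches and the resulting nvme0n1/nvme0n2 bdevs over /tmp/host.sock, as the bdev_nvme INFO messages above show.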
00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.770 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.771 [2024-11-26 07:34:38.743366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:10.771 [2024-11-26 07:34:38.744185] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:10.771 [2024-11-26 07:34:38.744205] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.771 07:34:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.771 [2024-11-26 07:34:38.831451] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.771 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.030 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:11.030 07:34:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:11.030 [2024-11-26 07:34:38.938176] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:11.030 [2024-11-26 07:34:38.938211] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:11.030 [2024-11-26 07:34:38.938219] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:11.030 [2024-11-26 07:34:38.938224] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:11.971 07:34:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.971 [2024-11-26 07:34:39.979151] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:11.971 [2024-11-26 07:34:39.979174] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:11.971 [2024-11-26 07:34:39.986781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.971 [2024-11-26 07:34:39.986800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.971 [2024-11-26 07:34:39.986809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.971 [2024-11-26 07:34:39.986816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.971 [2024-11-26 07:34:39.986824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.971 [2024-11-26 07:34:39.986831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.971 [2024-11-26 07:34:39.986838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.971 [2024-11-26 07:34:39.986845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.971 [2024-11-26 07:34:39.986851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.971 07:34:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:11.971 [2024-11-26 07:34:39.996794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:11.971 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.971 [2024-11-26 07:34:40.006829] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:11.971 [2024-11-26 07:34:40.006841] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:11.971 [2024-11-26 07:34:40.006846] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:11.971 [2024-11-26 07:34:40.006851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:11.971 [2024-11-26 07:34:40.006867] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:11.971 [2024-11-26 07:34:40.007011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.971 [2024-11-26 07:34:40.007026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:11.971 [2024-11-26 07:34:40.007034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:11.971 [2024-11-26 07:34:40.007046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:11.971 [2024-11-26 07:34:40.007057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:11.971 [2024-11-26 07:34:40.007063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:11.971 [2024-11-26 07:34:40.007072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
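The autotest_common.sh@918 through @924 shell entries interleaved through this stretch are the test's generic polling loop: the condition is kept as a string, evaluated up to ten times, with a one-second sleep between attempts. A sketch consistent with those trace lines (the real waitforcondition lives in the shared autotest_common.sh and may handle the failure path differently):

    # Poll an arbitrary shell condition until it holds or ~10 attempts elapse.
    # Reconstructed from the local cond / local max=10 / (( max-- )) / eval /
    # sleep 1 lines in the trace; sketch only.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

A typical call, as seen at host/discovery.sh@130 above: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'.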
00:25:11.971 [2024-11-26 07:34:40.007078] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:11.971 [2024-11-26 07:34:40.007084] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:11.971 [2024-11-26 07:34:40.007088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:11.971 [2024-11-26 07:34:40.016898] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:11.971 [2024-11-26 07:34:40.016909] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:11.971 [2024-11-26 07:34:40.016913] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:11.972 [2024-11-26 07:34:40.016917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:11.972 [2024-11-26 07:34:40.016931] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:11.972 [2024-11-26 07:34:40.017039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.972 [2024-11-26 07:34:40.017051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:11.972 [2024-11-26 07:34:40.017059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:11.972 [2024-11-26 07:34:40.017070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:11.972 [2024-11-26 07:34:40.017080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:11.972 [2024-11-26 07:34:40.017086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:11.972 [2024-11-26 07:34:40.017094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:11.972 [2024-11-26 07:34:40.017103] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:11.972 [2024-11-26 07:34:40.017108] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:11.972 [2024-11-26 07:34:40.017112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:11.972 [2024-11-26 07:34:40.026962] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:11.972 [2024-11-26 07:34:40.026975] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:11.972 [2024-11-26 07:34:40.026979] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:11.972 [2024-11-26 07:34:40.026984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:11.972 [2024-11-26 07:34:40.026998] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:11.972 [2024-11-26 07:34:40.027105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.972 [2024-11-26 07:34:40.027117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:11.972 [2024-11-26 07:34:40.027125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:11.972 [2024-11-26 07:34:40.027136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:11.972 [2024-11-26 07:34:40.027146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:11.972 [2024-11-26 07:34:40.027153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:11.972 [2024-11-26 07:34:40.027160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:11.972 [2024-11-26 07:34:40.027166] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:11.972 [2024-11-26 07:34:40.027170] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:11.972 [2024-11-26 07:34:40.027174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:11.972 [2024-11-26 07:34:40.037028] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:11.972 [2024-11-26 07:34:40.037040] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:11.972 [2024-11-26 07:34:40.037044] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:11.972 [2024-11-26 07:34:40.037048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:11.972 [2024-11-26 07:34:40.037061] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:11.972 [2024-11-26 07:34:40.037142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.972 [2024-11-26 07:34:40.037154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:11.972 [2024-11-26 07:34:40.037161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:11.972 [2024-11-26 07:34:40.037172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:11.972 [2024-11-26 07:34:40.037182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:11.972 [2024-11-26 07:34:40.037188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:11.972 [2024-11-26 07:34:40.037196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:11.972 [2024-11-26 07:34:40.037201] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:11.972 [2024-11-26 07:34:40.037206] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:11.972 [2024-11-26 07:34:40.037211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.972 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.972 [2024-11-26 07:34:40.047092] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:11.972 [2024-11-26 07:34:40.047107] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:11.972 [2024-11-26 07:34:40.047112] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:11.972 [2024-11-26 07:34:40.047116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:11.972 [2024-11-26 07:34:40.047129] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:11.972 [2024-11-26 07:34:40.047309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.972 [2024-11-26 07:34:40.047321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:11.972 [2024-11-26 07:34:40.047328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:11.972 [2024-11-26 07:34:40.047339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:11.972 [2024-11-26 07:34:40.047349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:11.972 [2024-11-26 07:34:40.047356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:11.972 [2024-11-26 07:34:40.047363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:11.972 [2024-11-26 07:34:40.047369] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:11.972 [2024-11-26 07:34:40.047374] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:11.972 [2024-11-26 07:34:40.047386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:11.972 [2024-11-26 07:34:40.057160] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:11.972 [2024-11-26 07:34:40.057171] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:11.972 [2024-11-26 07:34:40.057175] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:11.972 [2024-11-26 07:34:40.057179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:11.972 [2024-11-26 07:34:40.057193] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:11.972 [2024-11-26 07:34:40.057319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.972 [2024-11-26 07:34:40.057331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:11.972 [2024-11-26 07:34:40.057338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:11.972 [2024-11-26 07:34:40.057349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:11.972 [2024-11-26 07:34:40.057358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:11.972 [2024-11-26 07:34:40.057364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:11.972 [2024-11-26 07:34:40.057371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:11.972 [2024-11-26 07:34:40.057377] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
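The conditions being polled here are built from two small query helpers whose rpc_cmd/jq pipelines appear verbatim in the trace (host/discovery.sh@55 and @63). Sketches reconstructed from those pipelines, assumed to match the real helpers only up to incidental detail:

    # Names of the bdevs currently visible to the host-side SPDK app.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # trsvcid (port) of every connected path of one controller, e.g. "4420 4421".
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }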
00:25:11.972 [2024-11-26 07:34:40.057382] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:11.972 [2024-11-26 07:34:40.057386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.232 [2024-11-26 07:34:40.067223] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.232 [2024-11-26 07:34:40.067236] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.232 [2024-11-26 07:34:40.067240] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.232 [2024-11-26 07:34:40.067244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.232 [2024-11-26 07:34:40.067258] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:12.232 [2024-11-26 07:34:40.067457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.232 [2024-11-26 07:34:40.067469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:12.232 [2024-11-26 07:34:40.067477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:12.232 [2024-11-26 07:34:40.067487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:12.232 [2024-11-26 07:34:40.067497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.232 [2024-11-26 07:34:40.067503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.232 [2024-11-26 07:34:40.067510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.232 [2024-11-26 07:34:40.067516] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.232 [2024-11-26 07:34:40.067520] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.232 [2024-11-26 07:34:40.067528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.232 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.232 [2024-11-26 07:34:40.077289] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.232 [2024-11-26 07:34:40.077300] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.232 [2024-11-26 07:34:40.077304] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.232 [2024-11-26 07:34:40.077308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.232 [2024-11-26 07:34:40.077321] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:12.232 [2024-11-26 07:34:40.077489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.232 [2024-11-26 07:34:40.077501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:12.232 [2024-11-26 07:34:40.077508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:12.232 [2024-11-26 07:34:40.077518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:12.232 [2024-11-26 07:34:40.077528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.232 [2024-11-26 07:34:40.077534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.232 [2024-11-26 07:34:40.077540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.232 [2024-11-26 07:34:40.077545] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.232 [2024-11-26 07:34:40.077550] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.232 [2024-11-26 07:34:40.077554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.232 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:12.233 [2024-11-26 07:34:40.087352] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.233 [2024-11-26 07:34:40.087363] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.233 [2024-11-26 07:34:40.087367] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.233 [2024-11-26 07:34:40.087371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.233 [2024-11-26 07:34:40.087384] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:12.233 [2024-11-26 07:34:40.087491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.233 [2024-11-26 07:34:40.087502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:12.233 [2024-11-26 07:34:40.087509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:12.233 [2024-11-26 07:34:40.087519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:12.233 [2024-11-26 07:34:40.087528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.233 [2024-11-26 07:34:40.087534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.233 [2024-11-26 07:34:40.087540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.233 [2024-11-26 07:34:40.087546] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.233 [2024-11-26 07:34:40.087550] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.233 [2024-11-26 07:34:40.087554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.233 [2024-11-26 07:34:40.097416] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:12.233 [2024-11-26 07:34:40.097428] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:12.233 [2024-11-26 07:34:40.097433] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:12.233 [2024-11-26 07:34:40.097436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:12.233 [2024-11-26 07:34:40.097450] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
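The repeated posix.c "connect() failed, errno = 111" blocks are the host driver retrying the 10.0.0.2:4420 path after the test withdrew that listener from the target (errno 111 is ECONNREFUSED); the retries stop once the next discovery log page reports the 4420 path removed, which is what host/discovery.sh@131 is waiting for. On the target side this step amounts to two RPCs, shown here with SPDK's stock scripts/rpc.py purely for illustration (the test issues them through rpc_cmd, as the @118 and @127 entries above show):

    # Publish the subsystem on the second port, then withdraw the first one.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420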
00:25:12.233 [2024-11-26 07:34:40.097637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.233 [2024-11-26 07:34:40.097650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c390 with addr=10.0.0.2, port=4420 00:25:12.233 [2024-11-26 07:34:40.097657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c390 is same with the state(6) to be set 00:25:12.233 [2024-11-26 07:34:40.097668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c390 (9): Bad file descriptor 00:25:12.233 [2024-11-26 07:34:40.097677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:12.233 [2024-11-26 07:34:40.097684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:12.233 [2024-11-26 07:34:40.097691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:12.233 [2024-11-26 07:34:40.097697] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:12.233 [2024-11-26 07:34:40.097701] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:12.233 [2024-11-26 07:34:40.097708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.233 [2024-11-26 07:34:40.105855] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:12.233 [2024-11-26 07:34:40.105872] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:12.233 07:34:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
[[ 4421 == \4\4\2\1 ]] 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
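get_subsystem_names, invoked in the next entries, is the same rpc_cmd/jq pattern applied to bdev_nvme_get_controllers (host/discovery.sh@59); after the @134 bdev_nvme_stop_discovery call above it has to come back empty, and the bdev list with it. A sketch reconstructed from that pipeline, with the usual caveat that the real helper may differ in detail:

    # Names of the NVMe-oF controllers currently attached on the host side.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }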
00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.171 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.430 
07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.430 07:34:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.368 [2024-11-26 07:34:42.433431] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:14.368 [2024-11-26 07:34:42.433447] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:14.368 [2024-11-26 07:34:42.433457] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:14.627 [2024-11-26 07:34:42.520728] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:14.627 [2024-11-26 07:34:42.626476] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:14.627 [2024-11-26 07:34:42.627080] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x9691d0:1 started. 
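Discovery was just re-registered (host/discovery.sh@141 above); the entries that follow (@143) deliberately issue bdev_nvme_start_discovery a second time under the already-used name "nvme" and expect JSON-RPC error -17 ("File exists") -- the trace wraps the call in its NOT helper. A sketch of that negative check, under the same assumptions as the earlier sketches (the real NOT wrapper in autotest_common.sh is more general):

    # Starting a second discovery service under an existing name must fail.
    if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
            -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "duplicate bdev_nvme_start_discovery unexpectedly succeeded" >&2
        exit 1
    fi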
00:25:14.627 [2024-11-26 07:34:42.628734] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:14.628 [2024-11-26 07:34:42.628759] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.628 [2024-11-26 07:34:42.632178] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x9691d0 was disconnected and freed. delete nvme_qpair. 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.628 request: 00:25:14.628 { 00:25:14.628 "name": "nvme", 00:25:14.628 "trtype": "tcp", 00:25:14.628 "traddr": "10.0.0.2", 00:25:14.628 "adrfam": "ipv4", 00:25:14.628 "trsvcid": "8009", 00:25:14.628 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:14.628 "wait_for_attach": true, 00:25:14.628 "method": "bdev_nvme_start_discovery", 00:25:14.628 "req_id": 1 00:25:14.628 } 00:25:14.628 Got JSON-RPC error response 00:25:14.628 response: 00:25:14.628 { 00:25:14.628 "code": -17, 00:25:14.628 "message": "File exists" 00:25:14.628 } 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:14.628 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.888 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.888 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:14.888 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.888 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:14.888 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.888 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.888 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.889 request: 00:25:14.889 { 00:25:14.889 "name": "nvme_second", 00:25:14.889 "trtype": "tcp", 00:25:14.889 "traddr": "10.0.0.2", 00:25:14.889 "adrfam": "ipv4", 00:25:14.889 "trsvcid": "8009", 00:25:14.889 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:14.889 "wait_for_attach": true, 00:25:14.889 "method": 
"bdev_nvme_start_discovery", 00:25:14.889 "req_id": 1 00:25:14.889 } 00:25:14.889 Got JSON-RPC error response 00:25:14.889 response: 00:25:14.889 { 00:25:14.889 "code": -17, 00:25:14.889 "message": "File exists" 00:25:14.889 } 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:14.889 07:34:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.889 07:34:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.909 [2024-11-26 07:34:43.864440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.909 [2024-11-26 07:34:43.864468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x986420 with addr=10.0.0.2, port=8010 00:25:15.909 [2024-11-26 07:34:43.864482] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:15.909 [2024-11-26 07:34:43.864488] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:15.909 [2024-11-26 07:34:43.864495] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:17.002 [2024-11-26 07:34:44.866889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.003 [2024-11-26 07:34:44.866914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x986420 with addr=10.0.0.2, port=8010 00:25:17.003 [2024-11-26 07:34:44.866926] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:17.003 [2024-11-26 07:34:44.866933] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:17.003 [2024-11-26 07:34:44.866964] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:18.034 [2024-11-26 07:34:45.869046] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:18.034 request: 00:25:18.034 { 00:25:18.034 "name": "nvme_second", 00:25:18.034 "trtype": "tcp", 00:25:18.034 "traddr": "10.0.0.2", 00:25:18.034 "adrfam": "ipv4", 00:25:18.034 "trsvcid": "8010", 00:25:18.034 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:18.034 "wait_for_attach": false, 00:25:18.034 "attach_timeout_ms": 3000, 00:25:18.034 "method": "bdev_nvme_start_discovery", 00:25:18.034 "req_id": 1 00:25:18.034 } 00:25:18.034 Got JSON-RPC error response 00:25:18.034 response: 00:25:18.034 { 00:25:18.034 "code": -110, 00:25:18.034 "message": "Connection timed out" 00:25:18.034 } 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:18.034 07:34:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 841125 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:18.034 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.035 rmmod nvme_tcp 00:25:18.035 rmmod nvme_fabrics 00:25:18.035 rmmod nvme_keyring 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 841097 ']' 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 841097 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 841097 ']' 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 841097 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.035 07:34:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 841097 00:25:18.035 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:18.035 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:18.035 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 841097' 00:25:18.035 killing process with pid 841097 00:25:18.035 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 841097 
00:25:18.035 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 841097 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.294 07:34:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.201 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.201 00:25:20.201 real 0m17.070s 00:25:20.201 user 0m21.856s 00:25:20.201 sys 0m5.158s 00:25:20.201 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.201 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.201 ************************************ 00:25:20.201 END TEST nvmf_host_discovery 00:25:20.201 ************************************ 00:25:20.201 07:34:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:20.201 07:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.201 07:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.201 07:34:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.461 ************************************ 00:25:20.461 START TEST nvmf_host_multipath_status 00:25:20.461 ************************************ 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:20.461 * Looking for test storage... 
00:25:20.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.461 --rc genhtml_branch_coverage=1 00:25:20.461 --rc genhtml_function_coverage=1 00:25:20.461 --rc genhtml_legend=1 00:25:20.461 --rc geninfo_all_blocks=1 00:25:20.461 --rc geninfo_unexecuted_blocks=1 00:25:20.461 00:25:20.461 ' 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:20.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.461 --rc genhtml_branch_coverage=1 00:25:20.461 --rc genhtml_function_coverage=1 00:25:20.461 --rc genhtml_legend=1 00:25:20.461 --rc geninfo_all_blocks=1 00:25:20.461 --rc geninfo_unexecuted_blocks=1 00:25:20.461 00:25:20.461 ' 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.461 --rc genhtml_branch_coverage=1 00:25:20.461 --rc genhtml_function_coverage=1 00:25:20.461 --rc genhtml_legend=1 00:25:20.461 --rc geninfo_all_blocks=1 00:25:20.461 --rc geninfo_unexecuted_blocks=1 00:25:20.461 00:25:20.461 ' 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.461 --rc genhtml_branch_coverage=1 00:25:20.461 --rc genhtml_function_coverage=1 00:25:20.461 --rc genhtml_legend=1 00:25:20.461 --rc geninfo_all_blocks=1 00:25:20.461 --rc geninfo_unexecuted_blocks=1 00:25:20.461 00:25:20.461 ' 00:25:20.461 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.462 07:34:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:25.738 07:34:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:25.738 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:25.739 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:25.739 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:25.739 Found net devices under 0000:86:00.0: cvl_0_0 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:25:25.739 Found net devices under 0000:86:00.1: cvl_0_1 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.739 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:25.998 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:25.998 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.998 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.998 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.998 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.998 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:25.998 07:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.998 07:34:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:25.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:25:25.998 00:25:25.998 --- 10.0.0.2 ping statistics --- 00:25:25.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.998 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:25.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:25:25.998 00:25:25.998 --- 10.0.0.1 ping statistics --- 00:25:25.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.998 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=846339 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 846339 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 846339 ']' 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.998 07:34:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.998 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:26.258 [2024-11-26 07:34:54.131215] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:25:26.258 [2024-11-26 07:34:54.131260] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.258 [2024-11-26 07:34:54.197364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:26.258 [2024-11-26 07:34:54.239484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.258 [2024-11-26 07:34:54.239520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.258 [2024-11-26 07:34:54.239527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.258 [2024-11-26 07:34:54.239533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.258 [2024-11-26 07:34:54.239538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.258 [2024-11-26 07:34:54.240731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.258 [2024-11-26 07:34:54.240734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.258 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.258 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:26.258 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:26.258 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.258 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:26.518 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.518 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=846339 00:25:26.518 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:26.518 [2024-11-26 07:34:54.544255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.518 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:26.777 Malloc0 00:25:26.777 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:27.036 07:34:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:27.295 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:27.295 [2024-11-26 07:34:55.297547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.295 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:27.554 [2024-11-26 07:34:55.486069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=846601 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 846601 /var/tmp/bdevperf.sock 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 846601 ']' 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.554 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:27.812 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.812 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:27.812 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:28.071 07:34:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:28.330 Nvme0n1 00:25:28.331 07:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:28.589 Nvme0n1 00:25:28.849 07:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:28.849 07:34:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:30.754 07:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:30.754 07:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:31.013 07:34:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:31.013 07:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:32.408 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:32.408 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:32.408 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.408 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.408 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.408 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:32.408 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.408 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.667 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.667 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.667 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.667 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.667 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.667 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.667 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.667 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.926 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.926 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.926 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.926 07:35:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.212 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.212 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:33.212 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.212 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.471 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.471 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:33.471 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
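The host side attaches the same controller name over both listeners and then polls path state through bdev_nvme_get_io_paths. A minimal sketch of that pattern, reconstructed from the rpc.py/jq pairs in the trace: the RPC names, flags and jq filter are taken verbatim from the log, while the port_status function body and its local variable names are assumptions (the real helper lives in host/multipath_status.sh).

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Attach the same bdev controller (Nvme0) over both listeners; with
# -x multipath the second attach adds a second path to the same bdev
# instead of creating a new controller.
$rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# Check one attribute (current/connected/accessible) of the path behind a
# given listener port against an expected value.
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$($rpc -s $sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

# With both listeners optimized, and before the policy is switched to
# active_active later in the trace, only the 4420 path is current while
# both paths stay connected and accessible:
port_status 4420 current    true
port_status 4421 current    false
port_status 4420 connected  true
port_status 4421 connected  true
port_status 4420 accessible true
port_status 4421 accessible true
```

The check_status calls in the trace (six booleans) follow exactly this order: current for 4420/4421, connected for 4420/4421, accessible for 4420/4421.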
00:25:33.471 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.729 07:35:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:34.665 07:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:34.665 07:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:34.665 07:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.665 07:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.923 07:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.923 07:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:34.923 07:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.923 07:35:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.181 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.181 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.181 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.181 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.439 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.439 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.439 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.439 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.697 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.697 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.697 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.697 07:35:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.697 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.697 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.697 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.697 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.955 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.955 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:35.955 07:35:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:36.214 07:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:36.472 07:35:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:37.409 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:37.409 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:37.409 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.409 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.668 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.668 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:37.668 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.668 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.927 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.927 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.927 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.927 07:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.186 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.186 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.186 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.186 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.186 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.186 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.186 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.186 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.445 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.445 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.445 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.445 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.704 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.704 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:38.704 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:38.963 07:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:39.222 07:35:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:40.159 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:40.159 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.159 07:35:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.159 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.418 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.418 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.418 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.418 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.418 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.418 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.418 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.418 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.678 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.678 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.678 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.678 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.937 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.937 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.937 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.937 07:35:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.198 07:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.198 07:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:41.198 07:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.198 07:35:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.198 07:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.198 07:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:41.198 07:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:41.457 07:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:41.716 07:35:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:42.653 07:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:42.653 07:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:42.653 07:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.653 07:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.912 07:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.912 07:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:42.912 07:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.913 07:35:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.171 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.171 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.171 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.171 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.430 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.431 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.431 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.431 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.690 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.690 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:43.690 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.690 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.690 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.690 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:43.690 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.690 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.950 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.950 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:43.950 07:35:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:44.208 07:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:44.467 07:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:45.405 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:45.405 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:45.405 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.405 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.665 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.665 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:45.665 07:35:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.665 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.665 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.665 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.665 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.665 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.924 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.924 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.924 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.924 07:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:46.184 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.184 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:46.184 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.184 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:46.442 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.442 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:46.442 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.442 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.442 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.442 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:46.701 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:46.701 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:46.986 07:35:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:47.245 07:35:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:48.182 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:48.182 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:48.182 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.182 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.441 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.441 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:48.441 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.441 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.700 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.700 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.700 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.700 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.700 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.700 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.700 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.700 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.959 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.959 07:35:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.959 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.959 07:35:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:49.218 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.218 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:49.218 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:49.218 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.477 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.477 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:49.477 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:49.736 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:49.995 07:35:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:50.931 07:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:50.931 07:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:50.931 07:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.931 07:35:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:51.190 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:51.190 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:51.190 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.190 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:51.190 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.190 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:51.190 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.190 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:51.448 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.448 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:51.448 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.448 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:51.707 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.707 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:51.707 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.707 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.966 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.966 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:51.966 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.966 07:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:52.224 07:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.224 07:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:52.225 07:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:52.483 07:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:52.483 07:35:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
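The second half of the trace repeats the same ANA permutations after switching the Nvme0n1 bdev to the active_active multipath policy (the bdev_nvme_set_multipath_policy call earlier in the log), after which both live paths report current == true. A sketch of the ANA-state step that precedes each check round: the RPC names and arguments are verbatim from the trace, while the set_ANA_state wrapper body is an assumption inferred from the paired @59/@60 calls.

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Switch the bdev to active_active path selection (done once, mid-test).
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

# One nvmf_subsystem_listener_set_ana_state call per listener, followed by a
# short sleep so the host can pick up the ANA change before check_status runs.
set_ANA_state() {
    local state_4420=$1 state_4421=$2
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
}

set_ANA_state non_optimized non_optimized
sleep 1
# Under active_active the trace then expects:
# check_status true true true true true true
```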
00:25:53.861 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:53.861 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:53.861 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.861 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.861 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.861 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:53.861 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.861 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.120 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.120 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.120 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.120 07:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.120 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.120 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.120 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.120 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.379 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.379 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.379 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.379 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.638 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.638 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:54.638 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.638 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.897 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.897 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:54.897 07:35:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:55.156 07:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:55.156 07:35:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:56.534 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:56.534 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:56.534 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.534 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.534 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.534 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:56.534 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.534 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.793 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.793 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.793 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.793 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.793 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:56.793 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.793 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.793 07:35:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.051 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.051 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.051 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.051 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.310 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.310 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:57.310 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:57.310 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 846601 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 846601 ']' 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 846601 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 846601 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 846601' 00:25:57.569 killing process with pid 846601 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 846601 00:25:57.569 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 846601 00:25:57.569 { 00:25:57.569 "results": [ 00:25:57.569 { 00:25:57.569 "job": "Nvme0n1", 00:25:57.569 
"core_mask": "0x4", 00:25:57.569 "workload": "verify", 00:25:57.569 "status": "terminated", 00:25:57.569 "verify_range": { 00:25:57.569 "start": 0, 00:25:57.569 "length": 16384 00:25:57.569 }, 00:25:57.569 "queue_depth": 128, 00:25:57.569 "io_size": 4096, 00:25:57.569 "runtime": 28.673352, 00:25:57.569 "iops": 10388.077403716175, 00:25:57.569 "mibps": 40.57842735826631, 00:25:57.569 "io_failed": 0, 00:25:57.569 "io_timeout": 0, 00:25:57.569 "avg_latency_us": 12300.906053413008, 00:25:57.569 "min_latency_us": 395.3530434782609, 00:25:57.569 "max_latency_us": 3078254.4139130437 00:25:57.569 } 00:25:57.569 ], 00:25:57.569 "core_count": 1 00:25:57.569 } 00:25:57.854 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 846601 00:25:57.854 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:57.854 [2024-11-26 07:34:55.537403] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:25:57.854 [2024-11-26 07:34:55.537461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846601 ] 00:25:57.854 [2024-11-26 07:34:55.596863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.854 [2024-11-26 07:34:55.637651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.854 Running I/O for 90 seconds... 00:25:57.854 11253.00 IOPS, 43.96 MiB/s [2024-11-26T06:35:25.954Z] 11275.00 IOPS, 44.04 MiB/s [2024-11-26T06:35:25.954Z] 11294.00 IOPS, 44.12 MiB/s [2024-11-26T06:35:25.954Z] 11266.25 IOPS, 44.01 MiB/s [2024-11-26T06:35:25.954Z] 11259.60 IOPS, 43.98 MiB/s [2024-11-26T06:35:25.954Z] 11284.33 IOPS, 44.08 MiB/s [2024-11-26T06:35:25.954Z] 11254.71 IOPS, 43.96 MiB/s [2024-11-26T06:35:25.954Z] 11227.62 IOPS, 43.86 MiB/s [2024-11-26T06:35:25.954Z] 11210.89 IOPS, 43.79 MiB/s [2024-11-26T06:35:25.954Z] 11222.90 IOPS, 43.84 MiB/s [2024-11-26T06:35:25.954Z] 11201.18 IOPS, 43.75 MiB/s [2024-11-26T06:35:25.954Z] 11191.83 IOPS, 43.72 MiB/s [2024-11-26T06:35:25.954Z] [2024-11-26 07:35:09.475336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.854 [2024-11-26 07:35:09.475403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.854 [2024-11-26 07:35:09.475424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.854 
[2024-11-26 07:35:09.475444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.854 [2024-11-26 07:35:09.475464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.854 [2024-11-26 07:35:09.475483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.854 [2024-11-26 07:35:09.475503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84288 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.854 [2024-11-26 07:35:09.475774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.854 [2024-11-26 07:35:09.475781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.475981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.475993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:25:57.855 [2024-11-26 07:35:09.476032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.855 [2024-11-26 07:35:09.476766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.855 [2024-11-26 07:35:09.476785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.855 [2024-11-26 07:35:09.476804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.855 [2024-11-26 07:35:09.476823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.855 [2024-11-26 07:35:09.476842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.855 [2024-11-26 07:35:09.476861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.855 [2024-11-26 07:35:09.476880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.855 [2024-11-26 07:35:09.476899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.855 [2024-11-26 07:35:09.476933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.855 [2024-11-26 07:35:09.476939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.476957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.476965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.476977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.476983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.476995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:57.856 [2024-11-26 07:35:09.477002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.856 [2024-11-26 07:35:09.477593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:25:57.856 [2024-11-26 07:35:09.477899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.477985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.477992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.478005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.856 [2024-11-26 07:35:09.478012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:57.856 [2024-11-26 07:35:09.478025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.478977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.478991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.478998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.479010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.479017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.479030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 
nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.479037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.479051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.479058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.479071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.479078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.479091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.857 [2024-11-26 07:35:09.479098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.479111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.479118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.479130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.857 [2024-11-26 07:35:09.479137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.857 [2024-11-26 07:35:09.479150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.858 
[2024-11-26 07:35:09.479424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.479717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.479724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.480130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.480144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.858 [2024-11-26 07:35:09.480159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.858 [2024-11-26 07:35:09.480168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.859 [2024-11-26 07:35:09.480181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.859 [2024-11-26 07:35:09.480187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:57.859 [2024-11-26 07:35:09.480200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.859 [2024-11-26 07:35:09.480207] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:57.859 [2024-11-26 07:35:09.480219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:57.859 [2024-11-26 07:35:09.480225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:57.859 [2024-11-26 07:35:09.480354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.859 [2024-11-26 07:35:09.480361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:25:57.859-00:25:57.864 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs: WRITE lba:84232-84840 and READ lba:83824-84224 (sqid:1, nsid:1, len:8), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, logged between 07:35:09.480 and 07:35:09.502 ...]
00:25:57.864 [2024-11-26 07:35:09.502462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1
lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.864 [2024-11-26 07:35:09.502620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.864 [2024-11-26 07:35:09.502639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.864 [2024-11-26 07:35:09.502658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.864 [2024-11-26 07:35:09.502677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.864 [2024-11-26 07:35:09.502698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.864 [2024-11-26 07:35:09.502717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:25:57.864 [2024-11-26 07:35:09.502842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.864 [2024-11-26 07:35:09.502849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.864 [2024-11-26 07:35:09.502861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.502868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.502880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.502887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.502899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.502905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.502919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.502926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.502939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.502946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.502963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.502970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.502982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.502989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:57.865 [2024-11-26 07:35:09.503408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.865 [2024-11-26 07:35:09.503542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.865 [2024-11-26 07:35:09.503562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.865 [2024-11-26 07:35:09.503581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.865 [2024-11-26 07:35:09.503599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:57.865 [2024-11-26 07:35:09.503612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.503618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.503639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.503658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.503677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.503696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:57.866 [2024-11-26 07:35:09.503979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.503985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.503998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.504006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.504025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.504044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.866 [2024-11-26 07:35:09.504063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.504082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.504871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.504893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.504913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.504932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.504961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.504980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.504993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.505000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.505012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.505021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.505035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.505042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.505054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.505061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.505074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.505080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.505093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.505100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.505112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.505123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.505135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.866 [2024-11-26 07:35:09.505142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:57.866 [2024-11-26 07:35:09.505155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:57.867 [2024-11-26 07:35:09.505339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.867 [2024-11-26 07:35:09.505575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.867 [2024-11-26 07:35:09.505594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.867 [2024-11-26 07:35:09.505615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.867 [2024-11-26 07:35:09.505635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.867 [2024-11-26 07:35:09.505654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.867 [2024-11-26 07:35:09.505673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.867 [2024-11-26 07:35:09.505693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.505705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.867 [2024-11-26 07:35:09.505712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:57.867 [2024-11-26 07:35:09.506032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.867 [2024-11-26 07:35:09.506045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... repeated nvme_qpair.c *NOTICE* pairs elided: 243:nvme_io_qpair_print_command for queued READ I/O (LBAs 83824-84224) and WRITE I/O (LBAs 84232-84840), all sqid:1, len:8, cids 0-126, each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; timestamps 2024-11-26 07:35:09.506-07:35:09.515 ...]
00:25:57.873 [2024-11-26 07:35:09.515177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.873 [2024-11-26 07:35:09.515183] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.873 [2024-11-26 07:35:09.515202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.873 [2024-11-26 07:35:09.515221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.873 [2024-11-26 07:35:09.515241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.873 [2024-11-26 07:35:09.515260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.873 [2024-11-26 07:35:09.515279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.873 [2024-11-26 07:35:09.515376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 
nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:57.873 [2024-11-26 07:35:09.515716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.873 [2024-11-26 07:35:09.515723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 
dnr:0 00:25:57.874 [2024-11-26 07:35:09.515954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.515980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.515993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.516000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.516983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.516995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.517002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.517022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.517041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.517060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.517080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.874 [2024-11-26 07:35:09.517099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.517118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.517138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.517157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.874 [2024-11-26 07:35:09.517169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.874 [2024-11-26 07:35:09.517176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:57.875 [2024-11-26 07:35:09.517217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.517983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.517990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:57.875 
[2024-11-26 07:35:09.518119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.875 [2024-11-26 07:35:09.518261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:57.875 [2024-11-26 07:35:09.518273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.518280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.518299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.518318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.518337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.518356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.518376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.518395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.518414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518492] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.518981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.518988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.519007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 
07:35:09.519027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.519129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.519149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.519174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.519193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.876 [2024-11-26 07:35:09.519213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.519232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.519251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.519270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.519289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83968 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.519309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.876 [2024-11-26 07:35:09.519328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:57.876 [2024-11-26 07:35:09.519340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 
07:35:09.519693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 
cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.877 [2024-11-26 07:35:09.519934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.877 [2024-11-26 07:35:09.519959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.877 [2024-11-26 07:35:09.519978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.519991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.877 [2024-11-26 07:35:09.519998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.520325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.877 [2024-11-26 07:35:09.520335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.520349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.877 [2024-11-26 07:35:09.520356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.520368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.877 [2024-11-26 07:35:09.520375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.520388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.877 [2024-11-26 07:35:09.520394] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.520406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.877 [2024-11-26 07:35:09.520413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.877 [2024-11-26 07:35:09.520426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.878 [2024-11-26 07:35:09.520569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.878 [2024-11-26 
07:35:09.520589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.878 [2024-11-26 07:35:09.520608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.878 [2024-11-26 07:35:09.520627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.878 [2024-11-26 07:35:09.520647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.878 [2024-11-26 07:35:09.520665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84280 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:102 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.520992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.520998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521493] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:57.878 [2024-11-26 07:35:09.521512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.878 [2024-11-26 07:35:09.521520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 
m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.521840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.879 [2024-11-26 07:35:09.521860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.879 [2024-11-26 07:35:09.521880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.879 [2024-11-26 07:35:09.521900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.879 [2024-11-26 07:35:09.521919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.879 [2024-11-26 07:35:09.521938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.879 [2024-11-26 07:35:09.521964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.879 [2024-11-26 07:35:09.521985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.521998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.879 [2024-11-26 07:35:09.522005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:57.879 [2024-11-26 07:35:09.522540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:57.879 [2024-11-26 07:35:09.522552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.879 [2024-11-26 07:35:09.522559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.880 [2024-11-26 07:35:09.522578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.880 [2024-11-26 07:35:09.522598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.880 [2024-11-26 07:35:09.522820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.880 [2024-11-26 07:35:09.522840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.522860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.522879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.522898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.522920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.522939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.522965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.522983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.522996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:25:57.880 [2024-11-26 07:35:09.523457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.880 [2024-11-26 07:35:09.523621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:57.880 [2024-11-26 07:35:09.523634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.523640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.523659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.523680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.523699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.523718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.523737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.523756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.523777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.523796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.523809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.523816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.524228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.881 [2024-11-26 07:35:09.524248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.524267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.524286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.524305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.881 [2024-11-26 07:35:09.524324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.881 [2024-11-26 07:35:09.524538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.881 [2024-11-26 07:35:09.524551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.524570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.524590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.524610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.524888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.524912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.524932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.524957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.524980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.524987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:25:57.882 [2024-11-26 07:35:09.525098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.882 [2024-11-26 07:35:09.525644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.882 [2024-11-26 07:35:09.525666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.882 [2024-11-26 07:35:09.525685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.882 [2024-11-26 07:35:09.525705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.882 [2024-11-26 07:35:09.525725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.882 [2024-11-26 07:35:09.525745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.882 [2024-11-26 07:35:09.525764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.882 [2024-11-26 07:35:09.525784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.882 [2024-11-26 07:35:09.525796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.525804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.525823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.525842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.525862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:57.883 [2024-11-26 07:35:09.525881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.525903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.525922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.525941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.525970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.525983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.525990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.883 [2024-11-26 07:35:09.526600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.526985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.526993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.527005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.527012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.527025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.527031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:57.883 [2024-11-26 07:35:09.527044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.527050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.527063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.527070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.527083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.527089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.527102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.527108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.527121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.883 [2024-11-26 07:35:09.527127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:57.883 [2024-11-26 07:35:09.527140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:57.884 [2024-11-26 07:35:09.527818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.884 [2024-11-26 07:35:09.527979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.527992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.527999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.528012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.528018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.528031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.528038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.528050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.528057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.528070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.528077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.528092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.884 [2024-11-26 07:35:09.528098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.884 [2024-11-26 07:35:09.528110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:25:57.885 [2024-11-26 07:35:09.528403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.528988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.528994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.529007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.529013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.529025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.529032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.529045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.529051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.529064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.529072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.529084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.529091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.885 [2024-11-26 07:35:09.529104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.885 [2024-11-26 07:35:09.529111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.529500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.886 [2024-11-26 07:35:09.529520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.529539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.529558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.529577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.529597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.529617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.529640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.529929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.529936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.530250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.530262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.530276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.530283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.530296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.530303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.530316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.886 [2024-11-26 07:35:09.530323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.530335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.530342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.530356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.530363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.530376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.530384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:57.886 [2024-11-26 07:35:09.530397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.886 [2024-11-26 07:35:09.530405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:25:57.887 [2024-11-26 07:35:09.530419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.530982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.530989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:57.887 [2024-11-26 07:35:09.531145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.887 [2024-11-26 07:35:09.531202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.887 [2024-11-26 07:35:09.531221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.887 [2024-11-26 07:35:09.531240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.887 [2024-11-26 07:35:09.531259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.887 [2024-11-26 07:35:09.531278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.887 [2024-11-26 07:35:09.531512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:57.887 [2024-11-26 07:35:09.531525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.887 [2024-11-26 07:35:09.531533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.888 [2024-11-26 07:35:09.531724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.888 [2024-11-26 07:35:09.531746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.888 [2024-11-26 07:35:09.531766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.888 [2024-11-26 07:35:09.531788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.888 [2024-11-26 07:35:09.531807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.888 [2024-11-26 07:35:09.531829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:25:57.888 [2024-11-26 07:35:09.531936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.531980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.531989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.888 [2024-11-26 07:35:09.532543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.888 [2024-11-26 07:35:09.532551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:57.889 [2024-11-26 07:35:09.532774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.532806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.532813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.889 [2024-11-26 07:35:09.533172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83880 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.889 [2024-11-26 07:35:09.533193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.889 [2024-11-26 07:35:09.533212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.889 [2024-11-26 07:35:09.533232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.889 [2024-11-26 07:35:09.533251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.889 [2024-11-26 07:35:09.533270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.889 [2024-11-26 07:35:09.533290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.889 [2024-11-26 07:35:09.533309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.533397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.533404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.536909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.536921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.536935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.889 [2024-11-26 07:35:09.536942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:57.889 [2024-11-26 07:35:09.536958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.536965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.536977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.536984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.536997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:25:57.890 [2024-11-26 07:35:09.537205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.890 [2024-11-26 07:35:09.537317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:57.890 [2024-11-26 07:35:09.537808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.890 [2024-11-26 07:35:09.537869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:57.890 [2024-11-26 07:35:09.537883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.537890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.537903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.537910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.537923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.537930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.537944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.537958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.537972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.537979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.537992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.537999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:25:57.891 [2024-11-26 07:35:09.538427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.891 [2024-11-26 07:35:09.538515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.891 [2024-11-26 07:35:09.538680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:57.891 [2024-11-26 07:35:09.538694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.538988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.538995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:57.892 [2024-11-26 07:35:09.539036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:09.539444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:09.539453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:57.892 10888.46 IOPS, 42.53 MiB/s [2024-11-26T06:35:25.992Z] 10110.71 IOPS, 39.49 MiB/s [2024-11-26T06:35:25.992Z] 9436.67 IOPS, 36.86 MiB/s [2024-11-26T06:35:25.992Z] 9041.19 IOPS, 35.32 MiB/s [2024-11-26T06:35:25.992Z] 9167.47 IOPS, 35.81 MiB/s [2024-11-26T06:35:25.992Z] 9281.17 IOPS, 36.25 MiB/s [2024-11-26T06:35:25.992Z] 9482.53 IOPS, 37.04 MiB/s [2024-11-26T06:35:25.992Z] 9683.25 IOPS, 37.83 MiB/s [2024-11-26T06:35:25.992Z] 9853.95 IOPS, 38.49 MiB/s [2024-11-26T06:35:25.992Z] 9916.32 IOPS, 38.74 MiB/s [2024-11-26T06:35:25.992Z] 9972.96 IOPS, 38.96 MiB/s [2024-11-26T06:35:25.992Z] 10045.62 IOPS, 39.24 MiB/s [2024-11-26T06:35:25.992Z] 10175.64 IOPS, 39.75 MiB/s [2024-11-26T06:35:25.992Z] 10293.15 IOPS, 40.21 MiB/s [2024-11-26T06:35:25.992Z] [2024-11-26 07:35:23.202898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:23.202942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:23.203000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.892 [2024-11-26 07:35:23.203009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:23.203023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.892 [2024-11-26 07:35:23.203030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:23.203042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.892 [2024-11-26 07:35:23.203049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:23.203062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.892 [2024-11-26 07:35:23.203069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:23.203082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:23.203089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:23.203101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.892 [2024-11-26 07:35:23.203108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:57.892 [2024-11-26 07:35:23.203120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-11-26 07:35:23.203127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.203140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-11-26 07:35:23.203147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.203160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-11-26 07:35:23.203167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.203180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-11-26 07:35:23.203187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-11-26 07:35:23.205028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-11-26 07:35:23.205057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-11-26 07:35:23.205076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-11-26 07:35:23.205096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.893 [2024-11-26 07:35:23.205116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.893 [2024-11-26 07:35:23.205135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.893 [2024-11-26 07:35:23.205155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.893 [2024-11-26 07:35:23.205174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:57.893 [2024-11-26 07:35:23.205186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.893 [2024-11-26 07:35:23.205193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:57.893 10347.33 IOPS, 40.42 MiB/s [2024-11-26T06:35:25.993Z] 10378.46 IOPS, 40.54 MiB/s [2024-11-26T06:35:25.993Z] Received shutdown signal, test time was about 28.674003 seconds 00:25:57.893 00:25:57.893 Latency(us) 00:25:57.893 [2024-11-26T06:35:25.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.893 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:57.893 Verification LBA range: start 0x0 length 0x4000 00:25:57.893 Nvme0n1 : 28.67 10388.08 40.58 0.00 0.00 12300.91 395.35 3078254.41 00:25:57.893 [2024-11-26T06:35:25.993Z] =================================================================================================================== 00:25:57.893 [2024-11-26T06:35:25.993Z] Total : 10388.08 40.58 0.00 0.00 12300.91 395.35 3078254.41 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:57.893 07:35:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:57.893 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:57.893 rmmod nvme_tcp 00:25:57.893 rmmod nvme_fabrics 00:25:58.153 rmmod nvme_keyring 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 846339 ']' 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 846339 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 846339 ']' 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 846339 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.153 07:35:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 846339 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 846339' 00:25:58.153 killing process with pid 846339 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 846339 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 846339 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
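The throughput columns in the run summary above follow directly from the 4096-byte I/O size: MiB/s = IOPS * 4096 / 1048576, i.e. IOPS / 256. A quick check against the reported averages (numbers taken from the summary; the awk calls are only used as a calculator):

  awk 'BEGIN { printf "%.2f\n", 10388.08 / 256 }'   # 40.58 MiB/s, matching the Nvme0n1 row
  awk 'BEGIN { printf "%.2f\n", 10888.46 / 256 }'   # 42.53 MiB/s, matching the first rolling sample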
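For reference, the nvmftestfini teardown running through this stretch of the trace reduces to a handful of commands; a minimal sketch using values from this run (the rpc.py path, subsystem NQN and PID 846339 come from the trace itself, the rest is standard cleanup):

  # Remove the test subsystem from the target, flush, unload the kernel initiator modules,
  # then stop the nvmf target process (killprocess is kill + wait in the trace above and below).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp       # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines are this command's -v output
  modprobe -v -r nvme-fabrics
  kill 846339 && wait 846339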
00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.153 07:35:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:00.688 00:26:00.688 real 0m39.934s 00:26:00.688 user 1m48.963s 00:26:00.688 sys 0m11.134s 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.688 ************************************ 00:26:00.688 END TEST nvmf_host_multipath_status 00:26:00.688 ************************************ 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.688 ************************************ 00:26:00.688 START TEST nvmf_discovery_remove_ifc 00:26:00.688 ************************************ 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:00.688 * Looking for test storage... 00:26:00.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:00.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.688 --rc genhtml_branch_coverage=1 00:26:00.688 --rc genhtml_function_coverage=1 00:26:00.688 --rc genhtml_legend=1 00:26:00.688 --rc geninfo_all_blocks=1 00:26:00.688 --rc geninfo_unexecuted_blocks=1 00:26:00.688 00:26:00.688 ' 00:26:00.688 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:00.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.688 --rc genhtml_branch_coverage=1 00:26:00.688 --rc genhtml_function_coverage=1 00:26:00.688 --rc genhtml_legend=1 00:26:00.688 --rc geninfo_all_blocks=1 00:26:00.689 --rc geninfo_unexecuted_blocks=1 00:26:00.689 00:26:00.689 ' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:00.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.689 --rc genhtml_branch_coverage=1 00:26:00.689 --rc genhtml_function_coverage=1 00:26:00.689 --rc genhtml_legend=1 00:26:00.689 --rc geninfo_all_blocks=1 00:26:00.689 --rc geninfo_unexecuted_blocks=1 00:26:00.689 00:26:00.689 ' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:00.689 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:26:00.689 --rc genhtml_branch_coverage=1 00:26:00.689 --rc genhtml_function_coverage=1 00:26:00.689 --rc genhtml_legend=1 00:26:00.689 --rc geninfo_all_blocks=1 00:26:00.689 --rc geninfo_unexecuted_blocks=1 00:26:00.689 00:26:00.689 ' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
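A few records up, nvmf/common.sh establishes the host identity for this test: nvme gen-hostnqn returns a UUID-based host NQN, NVME_HOSTID keeps the bare UUID, and both are packed into NVME_HOST for later connect calls. A condensed sketch of how that identity is typically consumed (the target address and subsystem NQN below are placeholders, and the parameter expansion is an illustration; the trace only shows the resulting values):

  HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*:}            # bare UUID, mirroring NVME_HOSTID above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"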
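The '[' '' -eq 1 ']' test just traced (nvmf/common.sh line 33) applies a numeric comparison to an empty value, which is why the "[: : integer expression expected" complaint shows up in the records immediately below; it is harmless noise rather than a test failure. A minimal reproduction and one way such a check could be guarded (the guard is an illustration, not what common.sh actually does, and which variable expands to '' here is not visible in the trace):

  var=""                                    # stands in for whatever line 33 expands to '' in this run
  [ "$var" -eq 1 ]                          # stderr: [: : integer expression expected
  [ "${var:-0}" -eq 1 ] && echo enabled     # defaulting to 0 keeps the check quiet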
00:26:00.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:00.689 07:35:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:05.961 07:35:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.961 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:05.962 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:05.962 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:05.962 Found net devices under 0000:86:00.0: cvl_0_0 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:05.962 Found net devices under 0000:86:00.1: cvl_0_1 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.962 07:35:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:05.962 07:35:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:05.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:26:05.962 00:26:05.962 --- 10.0.0.2 ping statistics --- 00:26:05.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.962 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:26:05.962 00:26:05.962 --- 10.0.0.1 ping statistics --- 00:26:05.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.962 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:05.962 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=855238 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 855238 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
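What the trace above sets up before any NVMe/TCP traffic flows: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target (10.0.0.2), its sibling port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened through the firewall, and reachability is verified in both directions before the target application is launched inside the namespace. A minimal sketch of the equivalent commands, assuming the interface names and 10.0.0.0/24 addressing from this run, with SPDK_DIR standing in for the checkout path:

    ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    modprobe nvme-tcp                                     # kernel NVMe/TCP support, as in the trace
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &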
00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 855238 ']' 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.222 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.222 [2024-11-26 07:35:34.145282] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:26:06.222 [2024-11-26 07:35:34.145331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.222 [2024-11-26 07:35:34.212322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.222 [2024-11-26 07:35:34.253380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.222 [2024-11-26 07:35:34.253416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.222 [2024-11-26 07:35:34.253423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.222 [2024-11-26 07:35:34.253429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.222 [2024-11-26 07:35:34.253434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
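The NOTICE lines above are the target's own hint for debugging this run: tracepoints were enabled with -e 0xFFFF, so a snapshot can be pulled while the app runs or the per-app shm file kept for later. A small aside on using that hint, assuming the spdk_trace tool from the same build is on PATH:

    spdk_trace -s nvmf -i 0          # snapshot the live tracepoint buffer for shm id 0
    cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shm file for offline analysis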
00:26:06.222 [2024-11-26 07:35:34.253998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.481 [2024-11-26 07:35:34.396707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.481 [2024-11-26 07:35:34.404865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:06.481 null0 00:26:06.481 [2024-11-26 07:35:34.436867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=855261 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 855261 /tmp/host.sock 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 855261 ']' 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:06.481 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.481 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.481 [2024-11-26 07:35:34.507836] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:26:06.481 [2024-11-26 07:35:34.507878] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855261 ] 00:26:06.481 [2024-11-26 07:35:34.569513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.740 [2024-11-26 07:35:34.612831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.740 07:35:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.120 [2024-11-26 07:35:35.803116] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:08.120 [2024-11-26 07:35:35.803137] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:08.120 [2024-11-26 07:35:35.803152] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:08.120 [2024-11-26 07:35:35.891422] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:08.120 [2024-11-26 07:35:35.993203] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:08.120 [2024-11-26 07:35:35.993988] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x186e9f0:1 started. 
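Above, the host-side app was started with its RPC server on /tmp/host.sock (not the target's default /var/tmp/spdk.sock) and with --wait-for-rpc, which is why framework_start_init has to be issued after the bdev_nvme options are set. The discovery service is then attached with deliberately short failure timeouts so that pulling the interface is noticed within a couple of seconds. The same sequence expressed as direct rpc.py calls; rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, whose path here is relative to an SPDK checkout:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach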
00:26:08.120 [2024-11-26 07:35:35.995334] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:08.120 [2024-11-26 07:35:35.995372] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:08.120 [2024-11-26 07:35:35.995398] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:08.120 [2024-11-26 07:35:35.995410] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:08.120 [2024-11-26 07:35:35.995428] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:08.120 07:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.120 07:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:08.120 07:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.120 07:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.120 07:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.120 07:35:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.120 [2024-11-26 07:35:36.002135] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x186e9f0 was disconnected and freed. delete nvme_qpair. 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
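wait_for_bdev nvme0n1 above is the test's readiness gate: it keeps calling get_bdev_list, which flattens the bdev_get_bdevs RPC output into a sorted, space-separated name list, until that list equals the expected value (here nvme0n1; later in the log an empty string, then nvme1n1). A minimal standalone reconstruction of that polling pattern, assuming direct rpc.py calls and an unbounded loop; the real helpers in discovery_remove_ifc.sh may cap the retries:

    get_bdev_list() {
        # same pipeline as the trace: names only, sorted, joined onto one line
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1    # matches the 1-second poll interval seen in the trace
        done
    }
    wait_for_bdev nvme0n1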
00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:08.120 07:35:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.496 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:09.497 07:35:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.433 07:35:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:10.433 07:35:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:11.480 07:35:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.416 07:35:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.352 [2024-11-26 07:35:41.436952] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:13.352 [2024-11-26 07:35:41.436992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.352 [2024-11-26 07:35:41.437018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.352 [2024-11-26 07:35:41.437027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.352 [2024-11-26 07:35:41.437034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.352 [2024-11-26 07:35:41.437042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.352 [2024-11-26 07:35:41.437049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.352 [2024-11-26 07:35:41.437056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.352 [2024-11-26 07:35:41.437063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.352 [2024-11-26 07:35:41.437071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.352 [2024-11-26 07:35:41.437077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.352 [2024-11-26 07:35:41.437084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184b220 is same with the state(6) to be set 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.352 07:35:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.611 [2024-11-26 07:35:41.446970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184b220 (9): Bad file descriptor 00:26:13.611 [2024-11-26 07:35:41.457005] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
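The errno 110 (ETIMEDOUT) read failure and the qpair state dump above are the intended effect of the fault injected earlier in the trace: the target's data address was deleted and its port taken down inside the namespace, so the host's I/O qpair stops responding and, with the 1 s reconnect delay and 2 s controller-loss timeout set at discovery time, the controller and its nvme0n1 bdev are torn down shortly after. The injected fault, as traced:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # remove the target's data address
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # and take the port down
    # the test then polls get_bdev_list once a second until it returns an empty string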
00:26:13.611 [2024-11-26 07:35:41.457015] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:13.611 [2024-11-26 07:35:41.457020] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:13.611 [2024-11-26 07:35:41.457025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:13.612 [2024-11-26 07:35:41.457048] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.549 [2024-11-26 07:35:42.512052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:14.549 [2024-11-26 07:35:42.512099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x184b220 with addr=10.0.0.2, port=4420 00:26:14.549 [2024-11-26 07:35:42.512117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184b220 is same with the state(6) to be set 00:26:14.549 [2024-11-26 07:35:42.512149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x184b220 (9): Bad file descriptor 00:26:14.549 [2024-11-26 07:35:42.512574] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:14.549 [2024-11-26 07:35:42.512603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:14.549 [2024-11-26 07:35:42.512614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:14.549 [2024-11-26 07:35:42.512625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:14.549 [2024-11-26 07:35:42.512634] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:14.549 [2024-11-26 07:35:42.512642] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:14.549 [2024-11-26 07:35:42.512648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:14.549 [2024-11-26 07:35:42.512658] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:14.549 [2024-11-26 07:35:42.512664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.549 07:35:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.486 [2024-11-26 07:35:43.515144] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:15.486 [2024-11-26 07:35:43.515167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:15.486 [2024-11-26 07:35:43.515178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:15.486 [2024-11-26 07:35:43.515185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:15.486 [2024-11-26 07:35:43.515193] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:15.486 [2024-11-26 07:35:43.515199] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:15.486 [2024-11-26 07:35:43.515204] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:15.487 [2024-11-26 07:35:43.515209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:15.487 [2024-11-26 07:35:43.515230] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:15.487 [2024-11-26 07:35:43.515252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.487 [2024-11-26 07:35:43.515265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.487 [2024-11-26 07:35:43.515276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.487 [2024-11-26 07:35:43.515283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.487 [2024-11-26 07:35:43.515290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.487 [2024-11-26 07:35:43.515297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.487 [2024-11-26 07:35:43.515304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.487 [2024-11-26 07:35:43.515311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.487 [2024-11-26 07:35:43.515318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.487 [2024-11-26 07:35:43.515325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.487 [2024-11-26 07:35:43.515332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:15.487 [2024-11-26 07:35:43.515382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183a900 (9): Bad file descriptor 00:26:15.487 [2024-11-26 07:35:43.516409] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:15.487 [2024-11-26 07:35:43.516419] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:15.487 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.487 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.487 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.487 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.487 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.487 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.487 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.487 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:15.746 07:35:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.684 07:35:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:16.684 07:35:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.621 [2024-11-26 07:35:45.569024] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:17.621 [2024-11-26 07:35:45.569043] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:17.621 [2024-11-26 07:35:45.569058] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:17.622 [2024-11-26 07:35:45.657327] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.881 [2024-11-26 07:35:45.799134] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:17.881 [2024-11-26 07:35:45.799802] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x183f760:1 started. 
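Recovery works the same way in reverse: the trace above re-adds 10.0.0.2/24 and brings cvl_0_0 back up, and the still-running discovery service reconnects on its own, creating a second controller instance (nqn.2016-06.io.spdk:cnode0, 2) whose namespace appears as nvme1n1, the value wait_for_bdev is now polling for. The recovery step, as traced:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # the test then polls get_bdev_list until it reports nvme1n1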
00:26:17.881 [2024-11-26 07:35:45.800842] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:17.881 [2024-11-26 07:35:45.800872] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:17.881 [2024-11-26 07:35:45.800888] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:17.881 [2024-11-26 07:35:45.800900] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:17.881 [2024-11-26 07:35:45.800906] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:17.881 [2024-11-26 07:35:45.807352] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x183f760 was disconnected and freed. delete nvme_qpair. 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:17.881 07:35:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:18.821 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 855261 00:26:18.822 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 855261 ']' 00:26:18.822 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 855261 00:26:18.822 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:18.822 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.822 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 855261 00:26:19.081 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:19.081 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:19.081 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 855261' 00:26:19.081 killing process with pid 855261 00:26:19.081 
07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 855261 00:26:19.081 07:35:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 855261 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.081 rmmod nvme_tcp 00:26:19.081 rmmod nvme_fabrics 00:26:19.081 rmmod nvme_keyring 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 855238 ']' 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 855238 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 855238 ']' 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 855238 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.081 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 855238 00:26:19.340 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:19.340 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:19.340 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 855238' 00:26:19.340 killing process with pid 855238 00:26:19.340 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 855238 00:26:19.340 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 855238 00:26:19.340 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.340 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.340 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@791 -- # iptables-save 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.341 07:35:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.879 07:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:21.879 00:26:21.879 real 0m21.096s 00:26:21.879 user 0m26.489s 00:26:21.879 sys 0m5.699s 00:26:21.879 07:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.879 07:35:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.879 ************************************ 00:26:21.879 END TEST nvmf_discovery_remove_ifc 00:26:21.879 ************************************ 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.880 ************************************ 00:26:21.880 START TEST nvmf_identify_kernel_target 00:26:21.880 ************************************ 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:21.880 * Looking for test storage... 
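nvmftestfini, traced above, unwinds the whole TCP fixture: the nvme-tcp and nvme-fabrics modules are unloaded, the nvmf target application is killed, the SPDK-tagged iptables rules are filtered back out, and the test namespace and addresses are cleaned up. A condensed sketch of the same teardown, using the pid, namespace and interface names reported in this run (and assuming _remove_spdk_ns simply deletes that namespace), looks like:

# teardown sketch based on the nvmftestfini steps above
sudo modprobe -v -r nvme-tcp                                     # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
sudo modprobe -v -r nvme-fabrics
sudo kill 855238                                                 # killprocess on the nvmf target app (pid from this run)
sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore   # strip only the SPDK-tagged ACCEPT rules
sudo ip netns del cvl_0_0_ns_spdk 2>/dev/null                    # assumed body of _remove_spdk_ns
sudo ip -4 addr flush cvl_0_1                                    # clear the initiator-side address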
00:26:21.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:21.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.880 --rc genhtml_branch_coverage=1 00:26:21.880 --rc genhtml_function_coverage=1 00:26:21.880 --rc genhtml_legend=1 00:26:21.880 --rc geninfo_all_blocks=1 00:26:21.880 --rc geninfo_unexecuted_blocks=1 00:26:21.880 00:26:21.880 ' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:21.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.880 --rc genhtml_branch_coverage=1 00:26:21.880 --rc genhtml_function_coverage=1 00:26:21.880 --rc genhtml_legend=1 00:26:21.880 --rc geninfo_all_blocks=1 00:26:21.880 --rc geninfo_unexecuted_blocks=1 00:26:21.880 00:26:21.880 ' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:21.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.880 --rc genhtml_branch_coverage=1 00:26:21.880 --rc genhtml_function_coverage=1 00:26:21.880 --rc genhtml_legend=1 00:26:21.880 --rc geninfo_all_blocks=1 00:26:21.880 --rc geninfo_unexecuted_blocks=1 00:26:21.880 00:26:21.880 ' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:21.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.880 --rc genhtml_branch_coverage=1 00:26:21.880 --rc genhtml_function_coverage=1 00:26:21.880 --rc genhtml_legend=1 00:26:21.880 --rc geninfo_all_blocks=1 00:26:21.880 --rc geninfo_unexecuted_blocks=1 00:26:21.880 00:26:21.880 ' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.880 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:21.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.881 07:35:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:27.154 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.154 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.154 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.154 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.154 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.154 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.154 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.155 07:35:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:27.155 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:27.155 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:27.155 Found net devices under 0000:86:00.0: cvl_0_0 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:27.155 Found net devices under 0000:86:00.1: cvl_0_1 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:26:27.155 00:26:27.155 --- 10.0.0.2 ping statistics --- 00:26:27.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.155 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:27.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:26:27.155 00:26:27.155 --- 10.0.0.1 ping statistics --- 00:26:27.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.155 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.155 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:27.156 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:27.156 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.156 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:27.156 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:27.156 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:27.156 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:27.156 07:35:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.156 07:35:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:27.156 07:35:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:29.692 Waiting for block devices as requested 00:26:29.692 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:29.692 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:29.951 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:29.951 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:29.951 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:29.951 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:30.210 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:30.210 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:30.210 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:30.210 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:30.470 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:30.470 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:30.470 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:30.729 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:30.729 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:30.729 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:30.729 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
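With the device back on the kernel nvme driver and confirmed non-zoned, configure_kernel_target exports it through the kernel nvmet stack purely via configfs, as the mkdir/echo/ln trace that follows shows. Condensed into a standalone sketch, with the NQN, block device and 10.0.0.1:4420 listener taken from this run (the xtrace does not show where each echo is redirected, so the standard nvmet configfs attribute names are assumed here):

# sketch of the configfs export performed by configure_kernel_target below
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"      # reported as Model Number in the identify output below
echo 1            > "$subsys/attr_allow_any_host"                 # assumed target of the first 'echo 1'
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

Once the symlink is in place, the nvme discover call below against 10.0.0.1:4420 returns two records, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, which spdk_nvme_identify then interrogates.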
00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:30.989 No valid GPT data, bailing 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:30.989 07:35:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:30.989 00:26:30.989 Discovery Log Number of Records 2, Generation counter 2 00:26:30.989 =====Discovery Log Entry 0====== 00:26:30.989 trtype: tcp 00:26:30.989 adrfam: ipv4 00:26:30.989 subtype: current discovery subsystem 00:26:30.989 treq: not specified, sq flow control disable supported 00:26:30.989 portid: 1 00:26:30.989 trsvcid: 4420 00:26:30.989 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:30.989 traddr: 10.0.0.1 00:26:30.989 eflags: none 00:26:30.989 sectype: none 00:26:30.989 =====Discovery Log Entry 1====== 00:26:30.989 trtype: tcp 00:26:30.989 adrfam: ipv4 00:26:30.989 subtype: nvme subsystem 00:26:30.989 treq: not specified, sq flow control disable 
supported 00:26:30.989 portid: 1 00:26:30.989 trsvcid: 4420 00:26:30.989 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:30.989 traddr: 10.0.0.1 00:26:30.989 eflags: none 00:26:30.989 sectype: none 00:26:30.989 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:30.989 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:31.250 ===================================================== 00:26:31.250 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:31.250 ===================================================== 00:26:31.250 Controller Capabilities/Features 00:26:31.250 ================================ 00:26:31.250 Vendor ID: 0000 00:26:31.250 Subsystem Vendor ID: 0000 00:26:31.250 Serial Number: 81335d8fc9e838eec0db 00:26:31.250 Model Number: Linux 00:26:31.250 Firmware Version: 6.8.9-20 00:26:31.250 Recommended Arb Burst: 0 00:26:31.250 IEEE OUI Identifier: 00 00 00 00:26:31.250 Multi-path I/O 00:26:31.250 May have multiple subsystem ports: No 00:26:31.250 May have multiple controllers: No 00:26:31.250 Associated with SR-IOV VF: No 00:26:31.250 Max Data Transfer Size: Unlimited 00:26:31.250 Max Number of Namespaces: 0 00:26:31.250 Max Number of I/O Queues: 1024 00:26:31.250 NVMe Specification Version (VS): 1.3 00:26:31.250 NVMe Specification Version (Identify): 1.3 00:26:31.250 Maximum Queue Entries: 1024 00:26:31.250 Contiguous Queues Required: No 00:26:31.250 Arbitration Mechanisms Supported 00:26:31.250 Weighted Round Robin: Not Supported 00:26:31.250 Vendor Specific: Not Supported 00:26:31.250 Reset Timeout: 7500 ms 00:26:31.250 Doorbell Stride: 4 bytes 00:26:31.250 NVM Subsystem Reset: Not Supported 00:26:31.250 Command Sets Supported 00:26:31.250 NVM Command Set: Supported 00:26:31.250 Boot Partition: Not Supported 00:26:31.250 Memory Page Size Minimum: 4096 bytes 00:26:31.250 Memory Page Size Maximum: 4096 bytes 00:26:31.250 Persistent Memory Region: Not Supported 00:26:31.250 Optional Asynchronous Events Supported 00:26:31.250 Namespace Attribute Notices: Not Supported 00:26:31.250 Firmware Activation Notices: Not Supported 00:26:31.250 ANA Change Notices: Not Supported 00:26:31.250 PLE Aggregate Log Change Notices: Not Supported 00:26:31.250 LBA Status Info Alert Notices: Not Supported 00:26:31.250 EGE Aggregate Log Change Notices: Not Supported 00:26:31.250 Normal NVM Subsystem Shutdown event: Not Supported 00:26:31.250 Zone Descriptor Change Notices: Not Supported 00:26:31.250 Discovery Log Change Notices: Supported 00:26:31.250 Controller Attributes 00:26:31.250 128-bit Host Identifier: Not Supported 00:26:31.250 Non-Operational Permissive Mode: Not Supported 00:26:31.250 NVM Sets: Not Supported 00:26:31.250 Read Recovery Levels: Not Supported 00:26:31.250 Endurance Groups: Not Supported 00:26:31.250 Predictable Latency Mode: Not Supported 00:26:31.250 Traffic Based Keep ALive: Not Supported 00:26:31.250 Namespace Granularity: Not Supported 00:26:31.250 SQ Associations: Not Supported 00:26:31.250 UUID List: Not Supported 00:26:31.250 Multi-Domain Subsystem: Not Supported 00:26:31.250 Fixed Capacity Management: Not Supported 00:26:31.250 Variable Capacity Management: Not Supported 00:26:31.250 Delete Endurance Group: Not Supported 00:26:31.250 Delete NVM Set: Not Supported 00:26:31.250 Extended LBA Formats Supported: Not Supported 00:26:31.250 Flexible Data Placement 
Supported: Not Supported 00:26:31.250 00:26:31.250 Controller Memory Buffer Support 00:26:31.250 ================================ 00:26:31.250 Supported: No 00:26:31.250 00:26:31.250 Persistent Memory Region Support 00:26:31.250 ================================ 00:26:31.250 Supported: No 00:26:31.250 00:26:31.250 Admin Command Set Attributes 00:26:31.250 ============================ 00:26:31.250 Security Send/Receive: Not Supported 00:26:31.250 Format NVM: Not Supported 00:26:31.250 Firmware Activate/Download: Not Supported 00:26:31.250 Namespace Management: Not Supported 00:26:31.250 Device Self-Test: Not Supported 00:26:31.250 Directives: Not Supported 00:26:31.250 NVMe-MI: Not Supported 00:26:31.250 Virtualization Management: Not Supported 00:26:31.250 Doorbell Buffer Config: Not Supported 00:26:31.250 Get LBA Status Capability: Not Supported 00:26:31.250 Command & Feature Lockdown Capability: Not Supported 00:26:31.250 Abort Command Limit: 1 00:26:31.250 Async Event Request Limit: 1 00:26:31.250 Number of Firmware Slots: N/A 00:26:31.250 Firmware Slot 1 Read-Only: N/A 00:26:31.250 Firmware Activation Without Reset: N/A 00:26:31.250 Multiple Update Detection Support: N/A 00:26:31.250 Firmware Update Granularity: No Information Provided 00:26:31.250 Per-Namespace SMART Log: No 00:26:31.250 Asymmetric Namespace Access Log Page: Not Supported 00:26:31.250 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:31.250 Command Effects Log Page: Not Supported 00:26:31.250 Get Log Page Extended Data: Supported 00:26:31.250 Telemetry Log Pages: Not Supported 00:26:31.250 Persistent Event Log Pages: Not Supported 00:26:31.250 Supported Log Pages Log Page: May Support 00:26:31.250 Commands Supported & Effects Log Page: Not Supported 00:26:31.250 Feature Identifiers & Effects Log Page:May Support 00:26:31.250 NVMe-MI Commands & Effects Log Page: May Support 00:26:31.250 Data Area 4 for Telemetry Log: Not Supported 00:26:31.250 Error Log Page Entries Supported: 1 00:26:31.250 Keep Alive: Not Supported 00:26:31.250 00:26:31.250 NVM Command Set Attributes 00:26:31.250 ========================== 00:26:31.250 Submission Queue Entry Size 00:26:31.250 Max: 1 00:26:31.250 Min: 1 00:26:31.250 Completion Queue Entry Size 00:26:31.250 Max: 1 00:26:31.250 Min: 1 00:26:31.250 Number of Namespaces: 0 00:26:31.250 Compare Command: Not Supported 00:26:31.250 Write Uncorrectable Command: Not Supported 00:26:31.250 Dataset Management Command: Not Supported 00:26:31.250 Write Zeroes Command: Not Supported 00:26:31.250 Set Features Save Field: Not Supported 00:26:31.250 Reservations: Not Supported 00:26:31.250 Timestamp: Not Supported 00:26:31.250 Copy: Not Supported 00:26:31.250 Volatile Write Cache: Not Present 00:26:31.250 Atomic Write Unit (Normal): 1 00:26:31.250 Atomic Write Unit (PFail): 1 00:26:31.250 Atomic Compare & Write Unit: 1 00:26:31.250 Fused Compare & Write: Not Supported 00:26:31.250 Scatter-Gather List 00:26:31.250 SGL Command Set: Supported 00:26:31.250 SGL Keyed: Not Supported 00:26:31.250 SGL Bit Bucket Descriptor: Not Supported 00:26:31.250 SGL Metadata Pointer: Not Supported 00:26:31.250 Oversized SGL: Not Supported 00:26:31.250 SGL Metadata Address: Not Supported 00:26:31.250 SGL Offset: Supported 00:26:31.250 Transport SGL Data Block: Not Supported 00:26:31.250 Replay Protected Memory Block: Not Supported 00:26:31.250 00:26:31.250 Firmware Slot Information 00:26:31.250 ========================= 00:26:31.250 Active slot: 0 00:26:31.250 00:26:31.250 00:26:31.250 Error Log 00:26:31.250 
========= 00:26:31.250 00:26:31.250 Active Namespaces 00:26:31.250 ================= 00:26:31.250 Discovery Log Page 00:26:31.250 ================== 00:26:31.250 Generation Counter: 2 00:26:31.250 Number of Records: 2 00:26:31.250 Record Format: 0 00:26:31.250 00:26:31.250 Discovery Log Entry 0 00:26:31.250 ---------------------- 00:26:31.250 Transport Type: 3 (TCP) 00:26:31.250 Address Family: 1 (IPv4) 00:26:31.250 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:31.250 Entry Flags: 00:26:31.250 Duplicate Returned Information: 0 00:26:31.250 Explicit Persistent Connection Support for Discovery: 0 00:26:31.250 Transport Requirements: 00:26:31.250 Secure Channel: Not Specified 00:26:31.250 Port ID: 1 (0x0001) 00:26:31.250 Controller ID: 65535 (0xffff) 00:26:31.250 Admin Max SQ Size: 32 00:26:31.250 Transport Service Identifier: 4420 00:26:31.250 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:31.250 Transport Address: 10.0.0.1 00:26:31.250 Discovery Log Entry 1 00:26:31.250 ---------------------- 00:26:31.250 Transport Type: 3 (TCP) 00:26:31.250 Address Family: 1 (IPv4) 00:26:31.250 Subsystem Type: 2 (NVM Subsystem) 00:26:31.250 Entry Flags: 00:26:31.250 Duplicate Returned Information: 0 00:26:31.250 Explicit Persistent Connection Support for Discovery: 0 00:26:31.250 Transport Requirements: 00:26:31.250 Secure Channel: Not Specified 00:26:31.250 Port ID: 1 (0x0001) 00:26:31.250 Controller ID: 65535 (0xffff) 00:26:31.250 Admin Max SQ Size: 32 00:26:31.250 Transport Service Identifier: 4420 00:26:31.250 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:31.250 Transport Address: 10.0.0.1 00:26:31.250 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:31.250 get_feature(0x01) failed 00:26:31.250 get_feature(0x02) failed 00:26:31.250 get_feature(0x04) failed 00:26:31.250 ===================================================== 00:26:31.250 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:31.250 ===================================================== 00:26:31.250 Controller Capabilities/Features 00:26:31.250 ================================ 00:26:31.250 Vendor ID: 0000 00:26:31.250 Subsystem Vendor ID: 0000 00:26:31.250 Serial Number: e3e5e2f33971a5a8cc01 00:26:31.250 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:31.250 Firmware Version: 6.8.9-20 00:26:31.250 Recommended Arb Burst: 6 00:26:31.250 IEEE OUI Identifier: 00 00 00 00:26:31.250 Multi-path I/O 00:26:31.250 May have multiple subsystem ports: Yes 00:26:31.250 May have multiple controllers: Yes 00:26:31.250 Associated with SR-IOV VF: No 00:26:31.250 Max Data Transfer Size: Unlimited 00:26:31.250 Max Number of Namespaces: 1024 00:26:31.250 Max Number of I/O Queues: 128 00:26:31.250 NVMe Specification Version (VS): 1.3 00:26:31.250 NVMe Specification Version (Identify): 1.3 00:26:31.250 Maximum Queue Entries: 1024 00:26:31.250 Contiguous Queues Required: No 00:26:31.250 Arbitration Mechanisms Supported 00:26:31.250 Weighted Round Robin: Not Supported 00:26:31.250 Vendor Specific: Not Supported 00:26:31.250 Reset Timeout: 7500 ms 00:26:31.250 Doorbell Stride: 4 bytes 00:26:31.250 NVM Subsystem Reset: Not Supported 00:26:31.250 Command Sets Supported 00:26:31.250 NVM Command Set: Supported 00:26:31.250 Boot Partition: Not Supported 00:26:31.250 
Memory Page Size Minimum: 4096 bytes 00:26:31.250 Memory Page Size Maximum: 4096 bytes 00:26:31.250 Persistent Memory Region: Not Supported 00:26:31.250 Optional Asynchronous Events Supported 00:26:31.250 Namespace Attribute Notices: Supported 00:26:31.250 Firmware Activation Notices: Not Supported 00:26:31.250 ANA Change Notices: Supported 00:26:31.250 PLE Aggregate Log Change Notices: Not Supported 00:26:31.250 LBA Status Info Alert Notices: Not Supported 00:26:31.250 EGE Aggregate Log Change Notices: Not Supported 00:26:31.250 Normal NVM Subsystem Shutdown event: Not Supported 00:26:31.250 Zone Descriptor Change Notices: Not Supported 00:26:31.250 Discovery Log Change Notices: Not Supported 00:26:31.250 Controller Attributes 00:26:31.250 128-bit Host Identifier: Supported 00:26:31.250 Non-Operational Permissive Mode: Not Supported 00:26:31.250 NVM Sets: Not Supported 00:26:31.250 Read Recovery Levels: Not Supported 00:26:31.250 Endurance Groups: Not Supported 00:26:31.250 Predictable Latency Mode: Not Supported 00:26:31.250 Traffic Based Keep ALive: Supported 00:26:31.250 Namespace Granularity: Not Supported 00:26:31.250 SQ Associations: Not Supported 00:26:31.250 UUID List: Not Supported 00:26:31.250 Multi-Domain Subsystem: Not Supported 00:26:31.250 Fixed Capacity Management: Not Supported 00:26:31.250 Variable Capacity Management: Not Supported 00:26:31.250 Delete Endurance Group: Not Supported 00:26:31.250 Delete NVM Set: Not Supported 00:26:31.250 Extended LBA Formats Supported: Not Supported 00:26:31.250 Flexible Data Placement Supported: Not Supported 00:26:31.250 00:26:31.250 Controller Memory Buffer Support 00:26:31.250 ================================ 00:26:31.250 Supported: No 00:26:31.250 00:26:31.250 Persistent Memory Region Support 00:26:31.250 ================================ 00:26:31.250 Supported: No 00:26:31.250 00:26:31.250 Admin Command Set Attributes 00:26:31.250 ============================ 00:26:31.250 Security Send/Receive: Not Supported 00:26:31.250 Format NVM: Not Supported 00:26:31.250 Firmware Activate/Download: Not Supported 00:26:31.250 Namespace Management: Not Supported 00:26:31.250 Device Self-Test: Not Supported 00:26:31.250 Directives: Not Supported 00:26:31.250 NVMe-MI: Not Supported 00:26:31.250 Virtualization Management: Not Supported 00:26:31.250 Doorbell Buffer Config: Not Supported 00:26:31.250 Get LBA Status Capability: Not Supported 00:26:31.250 Command & Feature Lockdown Capability: Not Supported 00:26:31.250 Abort Command Limit: 4 00:26:31.250 Async Event Request Limit: 4 00:26:31.250 Number of Firmware Slots: N/A 00:26:31.250 Firmware Slot 1 Read-Only: N/A 00:26:31.250 Firmware Activation Without Reset: N/A 00:26:31.250 Multiple Update Detection Support: N/A 00:26:31.250 Firmware Update Granularity: No Information Provided 00:26:31.250 Per-Namespace SMART Log: Yes 00:26:31.250 Asymmetric Namespace Access Log Page: Supported 00:26:31.250 ANA Transition Time : 10 sec 00:26:31.250 00:26:31.251 Asymmetric Namespace Access Capabilities 00:26:31.251 ANA Optimized State : Supported 00:26:31.251 ANA Non-Optimized State : Supported 00:26:31.251 ANA Inaccessible State : Supported 00:26:31.251 ANA Persistent Loss State : Supported 00:26:31.251 ANA Change State : Supported 00:26:31.251 ANAGRPID is not changed : No 00:26:31.251 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:31.251 00:26:31.251 ANA Group Identifier Maximum : 128 00:26:31.251 Number of ANA Group Identifiers : 128 00:26:31.251 Max Number of Allowed Namespaces : 1024 00:26:31.251 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:31.251 Command Effects Log Page: Supported 00:26:31.251 Get Log Page Extended Data: Supported 00:26:31.251 Telemetry Log Pages: Not Supported 00:26:31.251 Persistent Event Log Pages: Not Supported 00:26:31.251 Supported Log Pages Log Page: May Support 00:26:31.251 Commands Supported & Effects Log Page: Not Supported 00:26:31.251 Feature Identifiers & Effects Log Page:May Support 00:26:31.251 NVMe-MI Commands & Effects Log Page: May Support 00:26:31.251 Data Area 4 for Telemetry Log: Not Supported 00:26:31.251 Error Log Page Entries Supported: 128 00:26:31.251 Keep Alive: Supported 00:26:31.251 Keep Alive Granularity: 1000 ms 00:26:31.251 00:26:31.251 NVM Command Set Attributes 00:26:31.251 ========================== 00:26:31.251 Submission Queue Entry Size 00:26:31.251 Max: 64 00:26:31.251 Min: 64 00:26:31.251 Completion Queue Entry Size 00:26:31.251 Max: 16 00:26:31.251 Min: 16 00:26:31.251 Number of Namespaces: 1024 00:26:31.251 Compare Command: Not Supported 00:26:31.251 Write Uncorrectable Command: Not Supported 00:26:31.251 Dataset Management Command: Supported 00:26:31.251 Write Zeroes Command: Supported 00:26:31.251 Set Features Save Field: Not Supported 00:26:31.251 Reservations: Not Supported 00:26:31.251 Timestamp: Not Supported 00:26:31.251 Copy: Not Supported 00:26:31.251 Volatile Write Cache: Present 00:26:31.251 Atomic Write Unit (Normal): 1 00:26:31.251 Atomic Write Unit (PFail): 1 00:26:31.251 Atomic Compare & Write Unit: 1 00:26:31.251 Fused Compare & Write: Not Supported 00:26:31.251 Scatter-Gather List 00:26:31.251 SGL Command Set: Supported 00:26:31.251 SGL Keyed: Not Supported 00:26:31.251 SGL Bit Bucket Descriptor: Not Supported 00:26:31.251 SGL Metadata Pointer: Not Supported 00:26:31.251 Oversized SGL: Not Supported 00:26:31.251 SGL Metadata Address: Not Supported 00:26:31.251 SGL Offset: Supported 00:26:31.251 Transport SGL Data Block: Not Supported 00:26:31.251 Replay Protected Memory Block: Not Supported 00:26:31.251 00:26:31.251 Firmware Slot Information 00:26:31.251 ========================= 00:26:31.251 Active slot: 0 00:26:31.251 00:26:31.251 Asymmetric Namespace Access 00:26:31.251 =========================== 00:26:31.251 Change Count : 0 00:26:31.251 Number of ANA Group Descriptors : 1 00:26:31.251 ANA Group Descriptor : 0 00:26:31.251 ANA Group ID : 1 00:26:31.251 Number of NSID Values : 1 00:26:31.251 Change Count : 0 00:26:31.251 ANA State : 1 00:26:31.251 Namespace Identifier : 1 00:26:31.251 00:26:31.251 Commands Supported and Effects 00:26:31.251 ============================== 00:26:31.251 Admin Commands 00:26:31.251 -------------- 00:26:31.251 Get Log Page (02h): Supported 00:26:31.251 Identify (06h): Supported 00:26:31.251 Abort (08h): Supported 00:26:31.251 Set Features (09h): Supported 00:26:31.251 Get Features (0Ah): Supported 00:26:31.251 Asynchronous Event Request (0Ch): Supported 00:26:31.251 Keep Alive (18h): Supported 00:26:31.251 I/O Commands 00:26:31.251 ------------ 00:26:31.251 Flush (00h): Supported 00:26:31.251 Write (01h): Supported LBA-Change 00:26:31.251 Read (02h): Supported 00:26:31.251 Write Zeroes (08h): Supported LBA-Change 00:26:31.251 Dataset Management (09h): Supported 00:26:31.251 00:26:31.251 Error Log 00:26:31.251 ========= 00:26:31.251 Entry: 0 00:26:31.251 Error Count: 0x3 00:26:31.251 Submission Queue Id: 0x0 00:26:31.251 Command Id: 0x5 00:26:31.251 Phase Bit: 0 00:26:31.251 Status Code: 0x2 00:26:31.251 Status Code Type: 0x0 00:26:31.251 Do Not Retry: 1 00:26:31.251 
Error Location: 0x28 00:26:31.251 LBA: 0x0 00:26:31.251 Namespace: 0x0 00:26:31.251 Vendor Log Page: 0x0 00:26:31.251 ----------- 00:26:31.251 Entry: 1 00:26:31.251 Error Count: 0x2 00:26:31.251 Submission Queue Id: 0x0 00:26:31.251 Command Id: 0x5 00:26:31.251 Phase Bit: 0 00:26:31.251 Status Code: 0x2 00:26:31.251 Status Code Type: 0x0 00:26:31.251 Do Not Retry: 1 00:26:31.251 Error Location: 0x28 00:26:31.251 LBA: 0x0 00:26:31.251 Namespace: 0x0 00:26:31.251 Vendor Log Page: 0x0 00:26:31.251 ----------- 00:26:31.251 Entry: 2 00:26:31.251 Error Count: 0x1 00:26:31.251 Submission Queue Id: 0x0 00:26:31.251 Command Id: 0x4 00:26:31.251 Phase Bit: 0 00:26:31.251 Status Code: 0x2 00:26:31.251 Status Code Type: 0x0 00:26:31.251 Do Not Retry: 1 00:26:31.251 Error Location: 0x28 00:26:31.251 LBA: 0x0 00:26:31.251 Namespace: 0x0 00:26:31.251 Vendor Log Page: 0x0 00:26:31.251 00:26:31.251 Number of Queues 00:26:31.251 ================ 00:26:31.251 Number of I/O Submission Queues: 128 00:26:31.251 Number of I/O Completion Queues: 128 00:26:31.251 00:26:31.251 ZNS Specific Controller Data 00:26:31.251 ============================ 00:26:31.251 Zone Append Size Limit: 0 00:26:31.251 00:26:31.251 00:26:31.251 Active Namespaces 00:26:31.251 ================= 00:26:31.251 get_feature(0x05) failed 00:26:31.251 Namespace ID:1 00:26:31.251 Command Set Identifier: NVM (00h) 00:26:31.251 Deallocate: Supported 00:26:31.251 Deallocated/Unwritten Error: Not Supported 00:26:31.251 Deallocated Read Value: Unknown 00:26:31.251 Deallocate in Write Zeroes: Not Supported 00:26:31.251 Deallocated Guard Field: 0xFFFF 00:26:31.251 Flush: Supported 00:26:31.251 Reservation: Not Supported 00:26:31.251 Namespace Sharing Capabilities: Multiple Controllers 00:26:31.251 Size (in LBAs): 1953525168 (931GiB) 00:26:31.251 Capacity (in LBAs): 1953525168 (931GiB) 00:26:31.251 Utilization (in LBAs): 1953525168 (931GiB) 00:26:31.251 UUID: 5ae40639-707b-43ba-93d6-ea056429ea0d 00:26:31.251 Thin Provisioning: Not Supported 00:26:31.251 Per-NS Atomic Units: Yes 00:26:31.251 Atomic Boundary Size (Normal): 0 00:26:31.251 Atomic Boundary Size (PFail): 0 00:26:31.251 Atomic Boundary Offset: 0 00:26:31.251 NGUID/EUI64 Never Reused: No 00:26:31.251 ANA group ID: 1 00:26:31.251 Namespace Write Protected: No 00:26:31.251 Number of LBA Formats: 1 00:26:31.251 Current LBA Format: LBA Format #00 00:26:31.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:31.251 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.251 rmmod nvme_tcp 00:26:31.251 rmmod nvme_fabrics 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:31.251 07:35:59 
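The nvmftestfini trace above unloads the host-side NVMe/TCP modules with error checking relaxed, since the modules can still be referenced for a moment while the disconnect completes. A minimal sketch of that unload idiom follows; the retry count matches the trace's {1..20} loop, but the break-on-success check and the sleep are assumptions, since the log only shows each modprobe being attempted once.

    # Sketch of the relaxed module-unload loop seen in nvmftestfini.
    # Break-on-success and the sleep are assumptions for illustration.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e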
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.251 07:35:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:33.784 07:36:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:36.317 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:36.317 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:37.256 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:37.256 00:26:37.256 real 0m15.797s 00:26:37.256 user 0m4.004s 00:26:37.256 sys 0m8.163s 00:26:37.256 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.256 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.256 ************************************ 00:26:37.256 END TEST nvmf_identify_kernel_target 00:26:37.256 ************************************ 00:26:37.256 07:36:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:37.256 07:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:37.256 07:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.256 07:36:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.256 ************************************ 00:26:37.256 START TEST nvmf_auth_host 00:26:37.256 ************************************ 00:26:37.256 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:37.515 * Looking for test storage... 
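Before the auth suite gets going, it is worth recapping the teardown traced a few lines up: clean_kernel_target disables and removes the configfs-backed kernel target, and setup.sh then rebinds the ioatdma and NVMe devices to vfio-pci so SPDK can claim them for the next test. A rough reconstruction of the configfs removal is below; the enable-file path written by the bare `echo 0` step is an assumption, while the rm/rmdir targets and the final modprobe match the trace.

    # Sketch of clean_kernel_target: undo the kernel nvmet-tcp target config.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the 'echo 0' step
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet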
00:26:37.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.515 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:37.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.516 --rc genhtml_branch_coverage=1 00:26:37.516 --rc genhtml_function_coverage=1 00:26:37.516 --rc genhtml_legend=1 00:26:37.516 --rc geninfo_all_blocks=1 00:26:37.516 --rc geninfo_unexecuted_blocks=1 00:26:37.516 00:26:37.516 ' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:37.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.516 --rc genhtml_branch_coverage=1 00:26:37.516 --rc genhtml_function_coverage=1 00:26:37.516 --rc genhtml_legend=1 00:26:37.516 --rc geninfo_all_blocks=1 00:26:37.516 --rc geninfo_unexecuted_blocks=1 00:26:37.516 00:26:37.516 ' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:37.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.516 --rc genhtml_branch_coverage=1 00:26:37.516 --rc genhtml_function_coverage=1 00:26:37.516 --rc genhtml_legend=1 00:26:37.516 --rc geninfo_all_blocks=1 00:26:37.516 --rc geninfo_unexecuted_blocks=1 00:26:37.516 00:26:37.516 ' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:37.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.516 --rc genhtml_branch_coverage=1 00:26:37.516 --rc genhtml_function_coverage=1 00:26:37.516 --rc genhtml_legend=1 00:26:37.516 --rc geninfo_all_blocks=1 00:26:37.516 --rc geninfo_unexecuted_blocks=1 00:26:37.516 00:26:37.516 ' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.516 07:36:05 
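The lcov probe traced above (cmp_versions 1.15 '<' 2 followed by the LCOV_OPTS/LCOV exports) decides which --rc option names to pass to lcov: the installed 1.15 predates the 2.0 rename, so the legacy lcov_branch_coverage/lcov_function_coverage names are kept. The script compares the version element by element after splitting on dots; an equivalent check, sketched here with sort -V instead of the manual loop:

    # Sketch: "is the installed lcov older than 2?" using sort -V rather than
    # the element-by-element comparison done in scripts/common.sh.
    lcov_ver=$(lcov --version | awk '{print $NF}')
    if [ "$(printf '%s\n' "$lcov_ver" 2 | sort -V | head -n1)" != 2 ]; then
        echo "lcov $lcov_ver predates 2.0; keeping the legacy --rc option names"
    fi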
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:37.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:37.516 07:36:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.084 07:36:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:44.084 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:44.084 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.084 
07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:44.084 Found net devices under 0000:86:00.0: cvl_0_0 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:44.084 Found net devices under 0000:86:00.1: cvl_0_1 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.084 07:36:10 
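The discovery pass above classifies the machine's NICs by PCI vendor/device ID (0x8086:0x159b is an Intel E810 part driven by ice) and then records the kernel net device sitting under each matching PCI function, which is how cvl_0_0 and cvl_0_1 end up in net_devs. The same lookup can be done by hand; a short sketch, using the PCI addresses reported in this run:

    # Sketch: find the net devices behind the two E810 ports found above.
    for pci in 0000:86:00.0 0000:86:00.1; do
        echo -n "$pci -> "
        ls /sys/bus/pci/devices/$pci/net/    # e.g. cvl_0_0 / cvl_0_1
    done
    lspci -nn -s 86:00.0                     # shows the [8086:159b] device ID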
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.084 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.085 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.085 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.085 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.085 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.085 07:36:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:26:44.085 00:26:44.085 --- 10.0.0.2 ping statistics --- 00:26:44.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.085 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:26:44.085 00:26:44.085 --- 10.0.0.1 ping statistics --- 00:26:44.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.085 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=867738 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 867738 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 867738 ']' 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
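nvmf_tcp_init, traced above, avoids needing a second physical host by moving one E810 port (cvl_0_0) into a private network namespace to play the target side, addressing both ends on 10.0.0.0/24, opening TCP port 4420 through iptables, and checking reachability in both directions before nvmf_tgt is started inside that namespace. Condensed from the trace (the long workspace path is shortened here):

    # Condensed from the nvmf_tcp_init / nvmfappstart trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # then the target is launched in the namespace (workspace path shortened):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &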
00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=55d7870da036d4db0bfc53b6fd0e0e5c 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rEw 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 55d7870da036d4db0bfc53b6fd0e0e5c 0 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 55d7870da036d4db0bfc53b6fd0e0e5c 0 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=55d7870da036d4db0bfc53b6fd0e0e5c 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rEw 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rEw 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rEw 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.085 07:36:11 
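Each gen_dhchap_key call above pulls random bytes from /dev/urandom with xxd, wraps them into a DH-HMAC-CHAP secret string, and writes the result to a mode-0600 temp file that is later registered as key0/ckey0 and so on. The wrapping sketched below (secret bytes plus a little-endian CRC-32, base64-encoded, inside a "DHHC-1:<hash-id>:...:" envelope, with 00 meaning no hash transformation) reflects my understanding of the DHHC-1 representation rather than the exact body of format_dhchap_key in nvmf/common.sh, so treat it as illustrative.

    # Sketch: produce a DHHC-1 secret from 32 random hex chars (16 bytes).
    # The CRC-32 + base64 envelope is an assumption about the format, not
    # copied from nvmf/common.sh.
    key=$(xxd -p -c0 -l 16 /dev/urandom)
    secret=$(python3 -c 'import base64,struct,sys,zlib; raw=bytes.fromhex(sys.argv[1]); print("DHHC-1:00:%s:" % base64.b64encode(raw+struct.pack("<I",zlib.crc32(raw))).decode())' "$key")
    keyfile=$(mktemp -t spdk.key-null.XXX)
    echo "$secret" > "$keyfile"
    chmod 0600 "$keyfile"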
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=992c376615e5fb98bcd38a5c8fea5ea896ca0fd8aa31062957297fdaed87e343 00:26:44.085 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MmE 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 992c376615e5fb98bcd38a5c8fea5ea896ca0fd8aa31062957297fdaed87e343 3 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 992c376615e5fb98bcd38a5c8fea5ea896ca0fd8aa31062957297fdaed87e343 3 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=992c376615e5fb98bcd38a5c8fea5ea896ca0fd8aa31062957297fdaed87e343 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MmE 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MmE 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MmE 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=99be9639c8af1d5a4b379a7ddbb6052f081a514506708ed8 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4ll 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 99be9639c8af1d5a4b379a7ddbb6052f081a514506708ed8 0 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 99be9639c8af1d5a4b379a7ddbb6052f081a514506708ed8 0 
00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=99be9639c8af1d5a4b379a7ddbb6052f081a514506708ed8 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4ll 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4ll 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4ll 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a96a4aa12c1fc0526d27b40ab50fdd3488873ae85c6031bf 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4vz 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a96a4aa12c1fc0526d27b40ab50fdd3488873ae85c6031bf 2 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a96a4aa12c1fc0526d27b40ab50fdd3488873ae85c6031bf 2 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a96a4aa12c1fc0526d27b40ab50fdd3488873ae85c6031bf 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4vz 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4vz 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4vz 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.086 07:36:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=20dd2153c89c6270b59a915a170ec566 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BHN 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 20dd2153c89c6270b59a915a170ec566 1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 20dd2153c89c6270b59a915a170ec566 1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=20dd2153c89c6270b59a915a170ec566 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BHN 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BHN 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.BHN 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba960161e0d08fa0896b6c39580fa556 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4vt 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba960161e0d08fa0896b6c39580fa556 1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba960161e0d08fa0896b6c39580fa556 1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.086 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ba960161e0d08fa0896b6c39580fa556 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4vt 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4vt 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4vt 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d4259c15b71c11da55f450f9dada975673a9acd8c67dffd 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.x5Q 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d4259c15b71c11da55f450f9dada975673a9acd8c67dffd 2 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d4259c15b71c11da55f450f9dada975673a9acd8c67dffd 2 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4d4259c15b71c11da55f450f9dada975673a9acd8c67dffd 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.x5Q 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.x5Q 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.x5Q 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:44.087 07:36:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=61a5bd44648b3ac89de020353c0bfce7 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.JL4 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 61a5bd44648b3ac89de020353c0bfce7 0 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 61a5bd44648b3ac89de020353c0bfce7 0 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=61a5bd44648b3ac89de020353c0bfce7 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.JL4 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.JL4 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.JL4 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8e5e57b3d46693b5369e475f8812541b3ad0d0d3ba9c0f3a9fb3bc87c2c24565 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.T9k 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8e5e57b3d46693b5369e475f8812541b3ad0d0d3ba9c0f3a9fb3bc87c2c24565 3 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8e5e57b3d46693b5369e475f8812541b3ad0d0d3ba9c0f3a9fb3bc87c2c24565 3 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8e5e57b3d46693b5369e475f8812541b3ad0d0d3ba9c0f3a9fb3bc87c2c24565 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:44.087 07:36:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.T9k 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.T9k 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.T9k 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 867738 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 867738 ']' 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.087 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rEw 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MmE ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MmE 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4ll 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4vz ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.4vz 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.BHN 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4vt ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4vt 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.x5Q 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.JL4 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.JL4 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.T9k 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.347 07:36:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:44.347 07:36:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:46.880 Waiting for block devices as requested 00:26:46.880 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:46.880 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:47.139 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:47.139 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:47.139 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:47.139 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:47.397 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:47.397 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:47.397 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:47.397 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:47.655 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:47.655 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:47.655 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:47.913 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:47.913 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:47.913 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:47.913 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:48.481 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:48.740 No valid GPT data, bailing 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:48.740 07:36:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:48.740 00:26:48.740 Discovery Log Number of Records 2, Generation counter 2 00:26:48.740 =====Discovery Log Entry 0====== 00:26:48.740 trtype: tcp 00:26:48.740 adrfam: ipv4 00:26:48.740 subtype: current discovery subsystem 00:26:48.740 treq: not specified, sq flow control disable supported 00:26:48.740 portid: 1 00:26:48.740 trsvcid: 4420 00:26:48.740 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:48.740 traddr: 10.0.0.1 00:26:48.740 eflags: none 00:26:48.740 sectype: none 00:26:48.740 =====Discovery Log Entry 1====== 00:26:48.740 trtype: tcp 00:26:48.740 adrfam: ipv4 00:26:48.740 subtype: nvme subsystem 00:26:48.740 treq: not specified, sq flow control disable supported 00:26:48.740 portid: 1 00:26:48.740 trsvcid: 4420 00:26:48.740 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:48.740 traddr: 10.0.0.1 00:26:48.740 eflags: none 00:26:48.740 sectype: none 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:48.740 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.741 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.000 nvme0n1 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
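The gen_dhchap_key calls traced earlier in this section boil down to: pull random bytes from /dev/urandom with xxd, wrap the resulting hex string in the DHHC-1 secret representation, and store it in a mode-0600 temp file. The following is a minimal standalone sketch of that flow, not the helper itself; the function and variable names are illustrative, and the CRC32 suffix inside the base64 payload is an assumption based on the DH-HMAC-CHAP secret format that matches the DHHC-1 strings visible in this trace.

    gen_dhchap_key_sketch() {
        # usage: gen_dhchap_key_sketch <null|sha256|sha384|sha512> <hex-length>
        local digest=$1 len=$2
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len/2 random bytes, printed as hex
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        # DHHC-1:<digest id>:<base64(secret + CRC32(secret))>: -- here the secret is the hex string itself
        python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
    import base64, sys, zlib
    secret = sys.argv[1].encode()
    crc = zlib.crc32(secret).to_bytes(4, "little")
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
    PY
        chmod 0600 "$file"
        echo "$file"
    }

    # e.g. gen_dhchap_key_sketch sha384 48  ->  /tmp/spdk.key-sha384.XXX holding a DHHC-1:02:...: secret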
00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.000 07:36:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.260 nvme0n1 00:26:49.260 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.260 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.260 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.261 07:36:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.261 nvme0n1 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.261 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 nvme0n1 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.520 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.780 nvme0n1 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 
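Each connect_authenticate iteration traced above follows the same host-side pattern: the five key pairs were registered once with keyring_file_add_key, and every digest/dhgroup/keyid combination then restricts bdev_nvme to that one combination, attaches with DH-HMAC-CHAP, checks that the controller shows up, and detaches. A condensed standalone sketch for a single combination follows; the rpc.py path is an assumed placeholder, while the RPC names, --dhchap-* flags, key names and addresses are the ones visible in the trace.

    rpc=./scripts/rpc.py    # assumed path to SPDK's rpc.py; the test drives it through rpc_cmd

    # register one key pair with the keyring (file paths as generated earlier in this log)
    $rpc keyring_file_add_key key1  /tmp/spdk.key-null.4ll
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4vz

    # allow exactly one digest/dhgroup combination, then authenticate against the kernel target
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # the loop treats a visible "nvme0" controller as success, then tears it down
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0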
00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.780 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 nvme0n1 00:26:50.040 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.040 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.040 07:36:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.040 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.040 07:36:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:50.040 
07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.040 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.299 nvme0n1 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:50.299 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.300 07:36:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.300 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 nvme0n1 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.584 07:36:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.584 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.843 nvme0n1 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.843 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.844 07:36:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.844 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.102 nvme0n1 00:26:51.102 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.102 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.102 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.103 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.103 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.103 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.103 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.103 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.103 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.103 07:36:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
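
The trace above covers one complete host-side iteration for sha256/ffdhe3072: restrict the SPDK host to the digest and DH group under test, attach to the target over TCP with the per-keyid secret, confirm the controller came up, and detach. The following is a minimal standalone sketch of that cycle, assuming a running SPDK application reachable through scripts/rpc.py (rpc_cmd in the trace is the test harness's wrapper around the same RPCs); the address, NQNs and key names are reused from the log, and the keys themselves are expected to have been registered with the keyring beforehand by the test setup.

    #!/usr/bin/env bash
    # Minimal sketch of one host-side auth iteration, assuming it is run from an
    # SPDK checkout against a running SPDK application (rpc_cmd in the trace wraps
    # the same RPC calls shown here).
    set -e
    rpc=scripts/rpc.py   # assumption: invoked from the SPDK source tree

    # Limit the host to the digest/DH group under test (sha256 / ffdhe3072 here).
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Attach to the target at 10.0.0.1:4420 with key0/ckey0, as in the trace.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller authenticated and shows up as nvme0.
    name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    $rpc bdev_nvme_detach_controller nvme0

Each keyid from 0 through 4 repeats this cycle with the matching keyN/ckeyN pair, which is what the subsequent trace blocks show.
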
00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.103 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.360 nvme0n1 00:26:51.360 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.360 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.361 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.620 nvme0n1 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.620 07:36:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.620 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.879 nvme0n1 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
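
The 'for dhgroup in "${dhgroups[@]}"' and 'for keyid in "${!keys[@]}"' entries in the trace show the nesting that produces these repeated blocks. The pattern, sketched below only as an illustration (run_one_iteration is a hypothetical helper standing in for the attach/verify/detach cycle above, and the arrays are limited to the groups and key indices that appear in this part of the log, not the verbatim host/auth.sh):

    # Iteration pattern implied by the trace: every DH group is exercised with
    # every key index, reusing the single-iteration cycle sketched earlier.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
    keys=(key0 key1 key2 key3 key4)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # run_one_iteration is hypothetical; it stands for set_options,
            # attach with keyN/ckeyN, controller check, and detach.
            run_one_iteration sha256 "$dhgroup" "$keyid"
        done
    done
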
00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.879 07:36:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.138 nvme0n1 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.138 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
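
The nvmet_auth_set_key calls interleaved in the trace are the target-side half of each iteration: the echoed 'hmac(sha256)', DH group and DHHC-1 secrets are programmed into the Linux kernel nvmet target for the host NQN before the SPDK host attaches. A hedged sketch of what that amounts to follows; the configfs attribute names used here are the usual nvmet-auth ones and are an assumption, since the trace itself only shows the echo statements, and the key strings are placeholders for the DHHC-1 values printed in the log.

    # Hedged sketch of the target-side key programming (needs root and an already
    # configured nvmet port/subsystem; attribute names are assumed, not from the trace).
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    key='DHHC-1:02:...'    # host secret for this keyid, as printed in the trace
    ckey='DHHC-1:00:...'   # controller (bidirectional) secret, as printed in the trace

    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"
    echo ffdhe4096      > "$host_dir/dhchap_dhgroup"
    echo "$key"         > "$host_dir/dhchap_key"
    echo "$ckey"        > "$host_dir/dhchap_ctrl_key"
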
00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.397 07:36:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.397 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.656 nvme0n1 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.656 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.915 nvme0n1 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.915 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.916 07:36:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.175 nvme0n1 00:26:53.175 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.175 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.175 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.175 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.175 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.175 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.433 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.433 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.433 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.433 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.433 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.433 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.434 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.693 nvme0n1 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.693 07:36:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.693 07:36:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.260 nvme0n1 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:54.260 
07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.260 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.519 nvme0n1 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.519 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.777 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.778 07:36:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.036 nvme0n1 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.036 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:55.037 07:36:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.037 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.605 nvme0n1 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.605 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.606 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.606 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.606 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:55.606 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.864 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.865 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.865 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.865 07:36:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.432 nvme0n1 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.432 07:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.432 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.000 nvme0n1 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.000 07:36:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.000 07:36:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.567 nvme0n1 00:26:57.567 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.567 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.567 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.567 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.568 07:36:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.568 07:36:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.134 nvme0n1 00:26:58.134 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.134 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.134 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.134 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.135 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.135 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:58.393 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:26:58.394 
07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.394 nvme0n1 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.394 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.653 nvme0n1 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.653 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.912 nvme0n1 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.912 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:58.913 07:36:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.913 07:36:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.172 nvme0n1 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
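The trace above is one pass of the hmac(sha384)/ffdhe2048 combinations: for each keyid the target-side key is published through the test's nvmet_auth_set_key helper, the initiator is restricted to that digest/dhgroup pair with bdev_nvme_set_options, and a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key. A minimal host-side sketch of one such iteration, using the address, NQNs and key names that appear in the trace (the scripts/rpc.py path stands in for the trace's rpc_cmd wrapper and is an assumption, as is the idea that key3/ckey3 were registered with the SPDK keyring earlier in the script):

  # allow only hmac(sha384) with ffdhe2048 for DH-HMAC-CHAP negotiation
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # attach with a host key and a bidirectional controller key; key3/ckey3 are
  # assumed to be keyring entries the test registered before this section
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3

The nvme0n1 lines that follow each attach are the bdev name returned by the RPC once authentication succeeds; bdev_nvme_get_controllers (filtered with jq -r '.[].name') then confirms the controller is present before it is detached again.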
00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.172 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:59.173 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.432 nvme0n1 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.432 07:36:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.432 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 nvme0n1 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.691 07:36:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.691 07:36:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.691 nvme0n1 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.691 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.950 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.951 07:36:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.951 nvme0n1 00:26:59.951 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.951 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.951 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.951 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.951 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.951 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.210 nvme0n1 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.210 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.211 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.211 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.211 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.211 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:00.469 
07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:00.469 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.470 nvme0n1 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.470 
07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.470 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.729 nvme0n1 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.729 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.988 07:36:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.988 07:36:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.253 nvme0n1 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.253 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.512 nvme0n1 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.512 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.513 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.513 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.513 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.513 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.771 nvme0n1 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.771 07:36:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.771 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.772 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.772 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.772 07:36:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.030 nvme0n1 00:27:02.030 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.030 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.030 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.030 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.030 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.030 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.031 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.031 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.031 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.031 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.289 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.548 nvme0n1 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.548 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.116 nvme0n1 00:27:03.116 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.116 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.116 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.116 07:36:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.116 07:36:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.116 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.116 07:36:31 
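connect_authenticate (host/auth.sh@55-@61, traced repeatedly above) is the host-side half: it restricts bdev_nvme to a single digest/DH-group pair, then attaches a controller over TCP using the keyring names for the given keyid. A sketch of just that setup step, assuming rpc_cmd wraps the SPDK RPC client as elsewhere in this run and that key${keyid}/ckey${keyid} were registered with the keyring earlier in the test; every flag below appears verbatim in the xtrace:

connect_sketch() {
  local digest=$1 dhgroup=$2 keyid=$3
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty when no ctrlr key (keyid 4)

  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$(get_main_ns_ip)" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
}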
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.117 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.375 nvme0n1 00:27:03.375 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.375 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.375 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.375 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.375 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.375 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.634 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.634 
07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.893 nvme0n1 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.893 07:36:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.461 nvme0n1 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.461 07:36:32 
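get_main_ns_ip (nvmf/common.sh@769-@783), traced before every attach, resolves which environment variable carries the address for the transport under test and prints its value; for tcp that is NVMF_INITIATOR_IP, which expands to 10.0.0.1 in this run. A sketch of the helper, assuming the transport name comes from a TEST_TRANSPORT-style variable (the trace only shows it already expanded to 'tcp') and that the final step is a bash indirect expansion:

get_main_ns_ip_sketch() {
  local ip transport=${TEST_TRANSPORT:-tcp}   # variable name assumed; expands to "tcp" here
  local -A ip_candidates=(
    ["rdma"]=NVMF_FIRST_TARGET_IP             # nvmf/common.sh@772
    ["tcp"]=NVMF_INITIATOR_IP                 # nvmf/common.sh@773
  )

  [[ -n ${transport} && -n ${ip_candidates[$transport]} ]] || return 1
  ip=${ip_candidates[$transport]}             # ip=NVMF_INITIATOR_IP
  [[ -n ${!ip} ]] || return 1                 # dereference the chosen variable
  echo "${!ip}"                               # prints 10.0.0.1 here (nvmf/common.sh@783)
}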
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:04.461 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.462 07:36:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.031 nvme0n1 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.031 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.598 nvme0n1 00:27:05.598 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.598 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.598 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.598 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.599 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.599 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.857 
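The key0..key4 and ckey0..ckey3 names passed to --dhchap-key and --dhchap-ctrlr-key above are keyring entry names, not the DHHC-1 strings themselves; their registration happens earlier in the test and is not part of this excerpt. A plausible sketch of that missing step using SPDK's file-based keyring RPC, with the temp-file paths being assumptions:

# Hypothetical registration step (not shown in this log); keyring_file_add_key
# associates a key name with a file holding the DHHC-1 secret.
for keyid in "${!keys[@]}"; do
  printf '%s' "${keys[keyid]}" > "/tmp/spdk.key${keyid}"
  chmod 0600 "/tmp/spdk.key${keyid}"
  rpc_cmd keyring_file_add_key "key${keyid}" "/tmp/spdk.key${keyid}"
  if [[ -n ${ckeys[keyid]} ]]; then
    printf '%s' "${ckeys[keyid]}" > "/tmp/spdk.ckey${keyid}"
    chmod 0600 "/tmp/spdk.ckey${keyid}"
    rpc_cmd keyring_file_add_key "ckey${keyid}" "/tmp/spdk.ckey${keyid}"
  fi
done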
07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.857 07:36:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.425 nvme0n1 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.425 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.993 nvme0n1 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.993 07:36:34 
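Each successful attach is followed by the same check-and-cleanup sequence (host/auth.sh@64-@65): list the controllers, confirm the only entry is nvme0, and detach it so the next digest/dhgroup/keyid combination starts from a clean state. The equivalent stand-alone snippet, using only commands that appear in the trace:

ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')   # host/auth.sh@64
[[ ${ctrlr} == "nvme0" ]]                                       # mismatch fails the test
rpc_cmd bdev_nvme_detach_controller nvme0                       # host/auth.sh@65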
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.993 07:36:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:06.993 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.994 07:36:35 
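The recurring markers host/auth.sh@100-@104 are the sweep driving all of this: every configured digest is paired with every DH group and every key index, and each combination runs the target-side and host-side helpers back to back. Only sha384/sha512 and ffdhe2048/4096/6144/8192 are visible in this part of the log; the arrays themselves are defined earlier in the script. A sketch of the loop nest, with nvmet_auth_set_key and connect_authenticate standing for the real helpers traced above:

for digest in "${digests[@]}"; do                         # host/auth.sh@100
  for dhgroup in "${dhgroups[@]}"; do                     # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                        # host/auth.sh@102
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103: program the kernel target
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104: attach, verify, detach
    done
  done
done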
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.994 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.561 nvme0n1 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.561 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.820 nvme0n1 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.820 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.821 07:36:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.080 nvme0n1 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:08.080 
07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.080 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.339 nvme0n1 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.339 
07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.339 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.598 nvme0n1 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.598 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.599 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.858 nvme0n1 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.858 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.859 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.117 nvme0n1 00:27:09.117 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.117 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.117 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.117 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.117 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.117 07:36:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.117 
07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.118 07:36:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.118 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.377 nvme0n1 00:27:09.377 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.377 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.377 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.377 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.377 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:09.378 07:36:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.378 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.638 nvme0n1 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.638 07:36:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.638 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.639 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.898 nvme0n1 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.898 
07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.898 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
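Each iteration in the trace above follows the same connect/authenticate pattern: the target-side key is staged for the chosen digest and DH group, the host is restricted to that single digest/group pair, a controller is attached with the matching DH-HMAC-CHAP key(s), the controller name is verified, and the controller is detached before the next combination. The lines below are a minimal host-side sketch of one such iteration, assuming rpc_cmd is the test suite's JSON-RPC wrapper and that the key3/ckey3 key names were registered earlier in the run (not shown in this excerpt); every flag is taken verbatim from the trace.

  # Allow only the digest/DH group under test for DH-HMAC-CHAP.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Attach to the target at 10.0.0.1:4420 with key3 (and ckey3 for bidirectional auth).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # The attach only succeeds if authentication passed; confirm, then tear down
  # so the next digest/dhgroup/keyid combination starts clean.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0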
00:27:10.157 nvme0n1 00:27:10.157 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.157 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.157 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.157 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.157 07:36:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.157 07:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.157 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.416 nvme0n1 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.416 07:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.416 07:36:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.416 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.675 nvme0n1 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.675 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.676 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.676 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.935 nvme0n1 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.935 07:36:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.935 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.194 nvme0n1 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.194 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.453 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.712 nvme0n1 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:11.712 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.713 07:36:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.713 07:36:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.971 nvme0n1 00:27:11.971 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.971 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.971 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.971 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.971 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.971 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.230 07:36:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.230 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.489 nvme0n1 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.489 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.749 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.009 nvme0n1 00:27:13.009 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.009 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.009 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.009 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.009 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.009 07:36:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.009 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.577 nvme0n1 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.577 07:36:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.577 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.836 nvme0n1 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTVkNzg3MGRhMDM2ZDRkYjBiZmM1M2I2ZmQwZTBlNWNXPEAO: 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: ]] 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTkyYzM3NjYxNWU1ZmI5OGJjZDM4YTVjOGZlYTVlYTg5NmNhMGZkOGFhMzEwNjI5NTcyOTdmZGFlZDg3ZTM0M/4VKgs=: 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.836 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.837 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.837 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.095 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.095 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.095 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.095 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.095 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.095 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.095 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.095 07:36:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.662 nvme0n1 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.662 07:36:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.237 nvme0n1 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.237 07:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.237 07:36:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.237 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.804 nvme0n1 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ0MjU5YzE1YjcxYzExZGE1NWY0NTBmOWRhZGE5NzU2NzNhOWFjZDhjNjdkZmZkI1zMxg==: 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: ]] 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjFhNWJkNDQ2NDhiM2FjODlkZTAyMDM1M2MwYmZjZTd+1Shi: 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:15.804 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.805 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.064 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.064 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.064 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.064 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.064 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.064 07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.064 
07:36:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.633 nvme0n1 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGU1ZTU3YjNkNDY2OTNiNTM2OWU0NzVmODgxMjU0MWIzYWQwZDBkM2JhOWMwZjNhOWZiM2JjODdjMmMyNDU2NcXV17Q=: 00:27:16.633 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.634 07:36:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.203 nvme0n1 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.203 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.204 request: 00:27:17.204 { 00:27:17.204 "name": "nvme0", 00:27:17.204 "trtype": "tcp", 00:27:17.204 "traddr": "10.0.0.1", 00:27:17.204 "adrfam": "ipv4", 00:27:17.204 "trsvcid": "4420", 00:27:17.204 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:17.204 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:17.204 "prchk_reftag": false, 00:27:17.204 "prchk_guard": false, 00:27:17.204 "hdgst": false, 00:27:17.204 "ddgst": false, 00:27:17.204 "allow_unrecognized_csi": false, 00:27:17.204 "method": "bdev_nvme_attach_controller", 00:27:17.204 "req_id": 1 00:27:17.204 } 00:27:17.204 Got JSON-RPC error response 00:27:17.204 response: 00:27:17.204 { 00:27:17.204 "code": -5, 00:27:17.204 "message": "Input/output error" 00:27:17.204 } 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
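The connect_authenticate iterations traced above reduce to a short RPC sequence against the running SPDK target: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test, attach a controller over TCP with the host key (and, for bidirectional auth, the controller key), verify it, and detach before the next key is tried. A minimal sketch using the same rpc_cmd wrapper and the key names from this run (key1/ckey1 refer to DH-HMAC-CHAP secrets set up earlier in auth.sh, outside the excerpt shown here):

  # Allow only the digest/dhgroup pair under test (sha512 + ffdhe8192 in the pass above).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # Attach to the kernel nvmet subsystem at 10.0.0.1:4420, authenticating with key1
  # and requesting controller authentication with ckey1.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Confirm the controller exists, then remove it before the next keyid is exercised.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0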
00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.204 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.464 request: 00:27:17.464 { 00:27:17.464 "name": "nvme0", 00:27:17.464 "trtype": "tcp", 00:27:17.464 "traddr": "10.0.0.1", 00:27:17.464 "adrfam": "ipv4", 00:27:17.464 "trsvcid": "4420", 00:27:17.464 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:17.464 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:17.464 "prchk_reftag": false, 00:27:17.464 "prchk_guard": false, 00:27:17.464 "hdgst": false, 00:27:17.464 "ddgst": false, 00:27:17.464 "dhchap_key": "key2", 00:27:17.464 "allow_unrecognized_csi": false, 00:27:17.464 "method": "bdev_nvme_attach_controller", 00:27:17.464 "req_id": 1 00:27:17.464 } 00:27:17.464 Got JSON-RPC error response 00:27:17.464 response: 00:27:17.464 { 00:27:17.464 "code": -5, 00:27:17.464 "message": "Input/output error" 00:27:17.464 } 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
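The request:/response: blocks with "code": -5 ("Input/output error") above and below are expected failures, not test breakage: auth.sh deliberately attaches without the required key, and then with the wrong key, against a subsystem that the target side has keyed for key1, wrapping each call in the NOT helper from autotest_common.sh, which inverts the exit status so the step passes only when the attach is rejected. The pattern, sketched with the same arguments as this run:

  # NOT succeeds only if the wrapped command fails. With DH-HMAC-CHAP required by
  # the target, an attach attempt without a key must be rejected.
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
  # Afterwards no controllers should be left behind.
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))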
00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.464 request: 00:27:17.464 { 00:27:17.464 "name": "nvme0", 00:27:17.464 "trtype": "tcp", 00:27:17.464 "traddr": "10.0.0.1", 00:27:17.464 "adrfam": "ipv4", 00:27:17.464 "trsvcid": "4420", 00:27:17.464 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:17.464 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:17.464 "prchk_reftag": false, 00:27:17.464 "prchk_guard": false, 00:27:17.464 "hdgst": false, 00:27:17.464 "ddgst": false, 00:27:17.464 "dhchap_key": "key1", 00:27:17.464 "dhchap_ctrlr_key": "ckey2", 00:27:17.464 "allow_unrecognized_csi": false, 00:27:17.464 "method": "bdev_nvme_attach_controller", 00:27:17.464 "req_id": 1 00:27:17.464 } 00:27:17.464 Got JSON-RPC error response 00:27:17.464 response: 00:27:17.464 { 00:27:17.464 "code": -5, 00:27:17.464 "message": "Input/output 
error" 00:27:17.464 } 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.464 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.724 nvme0n1 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.724 request: 00:27:17.724 { 00:27:17.724 "name": "nvme0", 00:27:17.724 "dhchap_key": "key1", 00:27:17.724 "dhchap_ctrlr_key": "ckey2", 00:27:17.724 "method": "bdev_nvme_set_keys", 00:27:17.724 "req_id": 1 00:27:17.724 } 00:27:17.724 Got JSON-RPC error response 00:27:17.724 response: 00:27:17.724 { 00:27:17.724 "code": -13, 00:27:17.724 "message": "Permission denied" 00:27:17.724 } 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.724 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.983 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.983 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:17.983 07:36:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:18.920 07:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.920 07:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:18.920 07:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.920 07:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.920 07:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.920 07:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:18.920 07:36:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTliZTk2MzljOGFmMWQ1YTRiMzc5YTdkZGJiNjA1MmYwODFhNTE0NTA2NzA4ZWQ4ObVwig==: 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: ]] 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YTk2YTRhYTEyYzFmYzA1MjZkMjdiNDBhYjUwZmRkMzQ4ODg3M2FlODVjNjAzMWJm6i0euA==: 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.857 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.116 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.116 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.116 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.117 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:20.117 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.117 07:36:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.117 nvme0n1 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjBkZDIxNTNjODljNjI3MGI1OWE5MTVhMTcwZWM1NjaTKXMJ: 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: ]] 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmE5NjAxNjFlMGQwOGZhMDg5NmI2YzM5NTgwZmE1NTaMqysl: 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.117 request: 00:27:20.117 { 00:27:20.117 "name": "nvme0", 00:27:20.117 "dhchap_key": "key2", 00:27:20.117 "dhchap_ctrlr_key": "ckey1", 00:27:20.117 "method": "bdev_nvme_set_keys", 00:27:20.117 "req_id": 1 00:27:20.117 } 00:27:20.117 Got JSON-RPC error response 00:27:20.117 response: 00:27:20.117 { 00:27:20.117 "code": -13, 00:27:20.117 "message": "Permission denied" 00:27:20.117 } 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.117 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.376 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:20.376 07:36:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:21.317 07:36:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.317 rmmod nvme_tcp 00:27:21.317 rmmod nvme_fabrics 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 867738 ']' 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 867738 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 867738 ']' 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 867738 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 867738 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 867738' 00:27:21.317 killing process with pid 867738 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 867738 00:27:21.317 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 867738 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:21.578 07:36:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.483 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:23.743 07:36:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:26.523 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:26.523 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:27.460 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:27.460 07:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rEw /tmp/spdk.key-null.4ll /tmp/spdk.key-sha256.BHN /tmp/spdk.key-sha384.x5Q /tmp/spdk.key-sha512.T9k /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:27.460 07:36:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:30.749 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:30.749 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:27:30.749 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:30.749 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:30.750 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:30.750 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:30.750 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:30.750 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:30.750 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:30.750 00:27:30.750 real 0m52.976s 00:27:30.750 user 0m47.350s 00:27:30.750 sys 0m11.994s 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.750 ************************************ 00:27:30.750 END TEST nvmf_auth_host 00:27:30.750 ************************************ 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.750 ************************************ 00:27:30.750 START TEST nvmf_digest 00:27:30.750 ************************************ 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:30.750 * Looking for test storage... 
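The last leg of the auth suite above (the bdev_nvme_set_keys calls just before the cleanup) exercised live re-keying of an already attached controller: rotating the host to a key pair the re-keyed target also accepts succeeds in place, while a mismatched pair is refused with -13 ("Permission denied"), and the jq-length polling loop then waits for the controller to drop once its short reconnect budget is exhausted. A condensed sketch with the names used in this run (key1/key2 and their controller counterparts come from earlier in auth.sh, outside this excerpt):

  # Attach with short loss/reconnect timeouts so a failed re-authentication
  # removes the controller quickly, as the polling loop in the trace shows.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
      --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
  # After the target has been re-keyed to keyid 2, rotating the host to the
  # matching pair succeeds in place...
  rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # ...whereas a mismatched pair is refused outright with -13 / Permission denied.
  NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2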
00:27:30.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:30.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.750 --rc genhtml_branch_coverage=1 00:27:30.750 --rc genhtml_function_coverage=1 00:27:30.750 --rc genhtml_legend=1 00:27:30.750 --rc geninfo_all_blocks=1 00:27:30.750 --rc geninfo_unexecuted_blocks=1 00:27:30.750 00:27:30.750 ' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:30.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.750 --rc genhtml_branch_coverage=1 00:27:30.750 --rc genhtml_function_coverage=1 00:27:30.750 --rc genhtml_legend=1 00:27:30.750 --rc geninfo_all_blocks=1 00:27:30.750 --rc geninfo_unexecuted_blocks=1 00:27:30.750 00:27:30.750 ' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:30.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.750 --rc genhtml_branch_coverage=1 00:27:30.750 --rc genhtml_function_coverage=1 00:27:30.750 --rc genhtml_legend=1 00:27:30.750 --rc geninfo_all_blocks=1 00:27:30.750 --rc geninfo_unexecuted_blocks=1 00:27:30.750 00:27:30.750 ' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:30.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:30.750 --rc genhtml_branch_coverage=1 00:27:30.750 --rc genhtml_function_coverage=1 00:27:30.750 --rc genhtml_legend=1 00:27:30.750 --rc geninfo_all_blocks=1 00:27:30.750 --rc geninfo_unexecuted_blocks=1 00:27:30.750 00:27:30.750 ' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.750 
07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:30.750 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:30.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:30.751 07:36:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:30.751 07:36:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.024 
07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:36.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:36.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:36.024 Found net devices under 0000:86:00.0: cvl_0_0 
00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:36.024 Found net devices under 0000:86:00.1: cvl_0_1 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.024 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:36.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:27:36.284 00:27:36.284 --- 10.0.0.2 ping statistics --- 00:27:36.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.284 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:27:36.284 00:27:36.284 --- 10.0.0.1 ping statistics --- 00:27:36.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.284 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:36.284 ************************************ 00:27:36.284 START TEST nvmf_digest_clean 00:27:36.284 ************************************ 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=881505 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 881505 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 881505 ']' 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.284 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.563 [2024-11-26 07:37:04.393199] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:27:36.563 [2024-11-26 07:37:04.393241] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.563 [2024-11-26 07:37:04.460435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.563 [2024-11-26 07:37:04.504436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.563 [2024-11-26 07:37:04.504471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.563 [2024-11-26 07:37:04.504478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.563 [2024-11-26 07:37:04.504484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.563 [2024-11-26 07:37:04.504489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
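[Note] Condensing what the nvmftestinit / nvmfappstart sequence above just did: the two E810 ports were detected as cvl_0_0 and cvl_0_1, the target-side port was moved into the cvl_0_0_ns_spdk namespace, the pair was addressed as 10.0.0.2 (target) and 10.0.0.1 (initiator), TCP/4420 was opened in iptables, reachability was confirmed with single pings in both directions, and nvmf_tgt was then launched inside the namespace with --wait-for-rpc. The sketch below is a minimal reconstruction of that plumbing from the commands visible in this log; interface names, addresses and paths are the ones this rig uses and would differ elsewhere.

```bash
#!/usr/bin/env bash
# Minimal sketch of the namespace plumbing performed by nvmftestinit above.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TGT_IF=cvl_0_0            # port handed to the SPDK target
INI_IF=cvl_0_1            # port left in the default namespace (initiator side)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the listener port used later (4420).
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, as in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the target inside the namespace; it idles until RPCs arrive.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
```

The target stays idle on /var/tmp/spdk.sock until RPCs arrive; the null0 bdev, the TCP transport and the listener on 10.0.0.2 port 4420 that appear a little further down in the log are created through that socket.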
00:27:36.563 [2024-11-26 07:37:04.505075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.563 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.563 null0 00:27:36.822 [2024-11-26 07:37:04.661102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.822 [2024-11-26 07:37:04.685301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=881544 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 881544 /var/tmp/bperf.sock 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 881544 ']' 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:27:36.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:36.822 [2024-11-26 07:37:04.738140] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:27:36.822 [2024-11-26 07:37:04.738180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881544 ] 00:27:36.822 [2024-11-26 07:37:04.799337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.822 [2024-11-26 07:37:04.839951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:36.822 07:37:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:37.080 07:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.080 07:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.339 nvme0n1 00:27:37.339 07:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:37.339 07:37:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.598 Running I/O for 2 seconds... 
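[Note] The randread/4096/qd128 pass that has just started follows the same pattern every run_bperf iteration in this log uses: start bdevperf idle with --wait-for-rpc on its own RPC socket, finish framework init, attach the remote subsystem over TCP with the data-digest flag, then drive the workload through bdevperf.py. A sketch of that sequence, using only the commands, socket path and NQN visible above:

```bash
#!/usr/bin/env bash
# Sketch of one run_bperf pass, reconstructed from the commands in this log.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf idle; -z keeps it alive until perform_tests is issued.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
BPERF_PID=$!

# Finish framework init, then attach the target with data digest enabled.
"$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the 2-second workload and print the JSON summary.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

kill "$BPERF_PID"
```

The --ddgst flag is what makes this a digest test: every NVMe/TCP data PDU carries a CRC32C that the host side must generate and verify, which is the work the accel statistics checked after each pass account for.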
00:27:39.468 23618.00 IOPS, 92.26 MiB/s [2024-11-26T06:37:07.568Z] 24073.50 IOPS, 94.04 MiB/s 00:27:39.468 Latency(us) 00:27:39.468 [2024-11-26T06:37:07.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.468 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:39.468 nvme0n1 : 2.01 24069.41 94.02 0.00 0.00 5312.89 2749.66 11853.47 00:27:39.468 [2024-11-26T06:37:07.568Z] =================================================================================================================== 00:27:39.468 [2024-11-26T06:37:07.568Z] Total : 24069.41 94.02 0.00 0.00 5312.89 2749.66 11853.47 00:27:39.468 { 00:27:39.468 "results": [ 00:27:39.468 { 00:27:39.468 "job": "nvme0n1", 00:27:39.468 "core_mask": "0x2", 00:27:39.468 "workload": "randread", 00:27:39.468 "status": "finished", 00:27:39.468 "queue_depth": 128, 00:27:39.468 "io_size": 4096, 00:27:39.468 "runtime": 2.005658, 00:27:39.468 "iops": 24069.407645770116, 00:27:39.468 "mibps": 94.02112361628951, 00:27:39.468 "io_failed": 0, 00:27:39.468 "io_timeout": 0, 00:27:39.468 "avg_latency_us": 5312.888703253552, 00:27:39.468 "min_latency_us": 2749.662608695652, 00:27:39.468 "max_latency_us": 11853.467826086957 00:27:39.468 } 00:27:39.468 ], 00:27:39.468 "core_count": 1 00:27:39.468 } 00:27:39.468 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:39.468 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:39.468 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:39.468 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:39.468 | select(.opcode=="crc32c") 00:27:39.468 | "\(.module_name) \(.executed)"' 00:27:39.468 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 881544 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 881544 ']' 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 881544 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 881544 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 881544' 00:27:39.727 killing process with pid 881544 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 881544 00:27:39.727 Received shutdown signal, test time was about 2.000000 seconds 00:27:39.727 00:27:39.727 Latency(us) 00:27:39.727 [2024-11-26T06:37:07.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.727 [2024-11-26T06:37:07.827Z] =================================================================================================================== 00:27:39.727 [2024-11-26T06:37:07.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:39.727 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 881544 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=882019 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 882019 /var/tmp/bperf.sock 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 882019 ']' 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:39.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.986 07:37:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:39.986 [2024-11-26 07:37:07.967794] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
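[Note] After each pass the script verifies that the CRC32C digests were actually computed by the expected accel module: it queries accel_get_stats on the bperf socket, filters the per-opcode counters with jq, and expects module_name to be "software" (DSA is disabled in these runs, scan_dsa=false) with a non-zero executed count. A condensed version of that check, built only from the calls visible above:

```bash
# Sketch of the crc32c accounting check performed after each bperf pass.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

read -r acc_module acc_executed < <(
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
)

# With DSA disabled the digests must have been computed in software,
# and at least one crc32c operation must have run.
[[ "$acc_module" == software && "$acc_executed" -gt 0 ]] ||
    { echo "unexpected crc32c accounting: $acc_module $acc_executed"; exit 1; }
```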
00:27:39.986 [2024-11-26 07:37:07.967842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882019 ] 00:27:39.986 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:39.986 Zero copy mechanism will not be used. 00:27:39.986 [2024-11-26 07:37:08.029148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.986 [2024-11-26 07:37:08.073033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.245 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.245 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:40.245 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:40.245 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:40.245 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:40.505 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.505 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.763 nvme0n1 00:27:40.763 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:40.763 07:37:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:40.763 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:40.763 Zero copy mechanism will not be used. 00:27:40.763 Running I/O for 2 seconds... 
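[Note] This second pass changes only the I/O geometry: 131072-byte reads at queue depth 16 instead of 4 KiB at depth 128. Because 131072 exceeds the 65536-byte zero-copy threshold, bdevperf logs that the zero-copy mechanism will not be used, and throughput is now dominated by block size rather than IOPS. As a quick sanity check against the summary that follows (the numbers below are the per-second sample this run reports):

```bash
# Rough throughput check: MiB/s is simply IOPS x block size.
iops=5412
bs=131072
awk -v iops="$iops" -v bs="$bs" \
    'BEGIN { printf "%.2f MiB/s\n", iops * bs / (1024 * 1024) }'
# prints 676.50 MiB/s, which lines up with bdevperf's MiB/s column below.
```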
00:27:42.634 5060.00 IOPS, 632.50 MiB/s [2024-11-26T06:37:10.735Z] 5412.00 IOPS, 676.50 MiB/s 00:27:42.635 Latency(us) 00:27:42.635 [2024-11-26T06:37:10.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.635 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:42.635 nvme0n1 : 2.00 5411.93 676.49 0.00 0.00 2953.81 740.84 9687.93 00:27:42.635 [2024-11-26T06:37:10.735Z] =================================================================================================================== 00:27:42.635 [2024-11-26T06:37:10.735Z] Total : 5411.93 676.49 0.00 0.00 2953.81 740.84 9687.93 00:27:42.894 { 00:27:42.894 "results": [ 00:27:42.894 { 00:27:42.894 "job": "nvme0n1", 00:27:42.894 "core_mask": "0x2", 00:27:42.894 "workload": "randread", 00:27:42.894 "status": "finished", 00:27:42.894 "queue_depth": 16, 00:27:42.894 "io_size": 131072, 00:27:42.894 "runtime": 2.002982, 00:27:42.894 "iops": 5411.93081116056, 00:27:42.894 "mibps": 676.49135139507, 00:27:42.894 "io_failed": 0, 00:27:42.894 "io_timeout": 0, 00:27:42.894 "avg_latency_us": 2953.811539868442, 00:27:42.894 "min_latency_us": 740.8417391304348, 00:27:42.894 "max_latency_us": 9687.93043478261 00:27:42.894 } 00:27:42.894 ], 00:27:42.894 "core_count": 1 00:27:42.894 } 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:42.894 | select(.opcode=="crc32c") 00:27:42.894 | "\(.module_name) \(.executed)"' 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 882019 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 882019 ']' 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 882019 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:42.894 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 882019 00:27:43.153 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:43.153 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:27:43.153 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 882019' 00:27:43.153 killing process with pid 882019 00:27:43.153 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 882019 00:27:43.153 Received shutdown signal, test time was about 2.000000 seconds 00:27:43.153 00:27:43.153 Latency(us) 00:27:43.153 [2024-11-26T06:37:11.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.153 [2024-11-26T06:37:11.253Z] =================================================================================================================== 00:27:43.153 [2024-11-26T06:37:11.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:43.153 07:37:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 882019 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=882499 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 882499 /var/tmp/bperf.sock 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 882499 ']' 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.153 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.153 [2024-11-26 07:37:11.184107] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
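[Note] Between passes each bdevperf instance is torn down with the killprocess helper seen just above: it confirms the PID is still alive, checks the process name so a sudo wrapper is never signalled by mistake, then kills the process and waits for its shutdown summary. A condensed sketch of that teardown logic, modelled on the calls visible in the log (the real helper also checks the host OS and handles a few more corner cases):

```bash
# Sketch of the killprocess teardown used between bperf passes.
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ "$name" != sudo ]] || return 1             # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap and collect the summary
}
```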
00:27:43.153 [2024-11-26 07:37:11.184155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882499 ] 00:27:43.412 [2024-11-26 07:37:11.247427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.412 [2024-11-26 07:37:11.291534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.412 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.412 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:43.412 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:43.412 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:43.412 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:43.672 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:43.672 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:43.931 nvme0n1 00:27:43.931 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:43.931 07:37:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:43.931 Running I/O for 2 seconds... 
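[Note] Each perform_tests call ends with a JSON document like the ones printed above (results[].iops, mibps, avg_latency_us and so on). When post-processing these logs it can be handy to pull those fields out directly; a small jq example, assuming the JSON block has been saved to a file such as bperf.json (a hypothetical name, not something the test itself writes):

```bash
# Extract the headline numbers from a saved bdevperf JSON summary.
# bperf.json is a hypothetical file name used only for this example.
jq -r '.results[]
       | [.job, .workload, .queue_depth, .io_size, .iops, .avg_latency_us]
       | @tsv' bperf.json
```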
00:27:46.242 27798.00 IOPS, 108.59 MiB/s [2024-11-26T06:37:14.342Z] 27933.00 IOPS, 109.11 MiB/s 00:27:46.242 Latency(us) 00:27:46.242 [2024-11-26T06:37:14.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.242 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:46.242 nvme0n1 : 2.01 27947.79 109.17 0.00 0.00 4573.17 2308.01 11055.64 00:27:46.242 [2024-11-26T06:37:14.342Z] =================================================================================================================== 00:27:46.242 [2024-11-26T06:37:14.342Z] Total : 27947.79 109.17 0.00 0.00 4573.17 2308.01 11055.64 00:27:46.242 { 00:27:46.242 "results": [ 00:27:46.242 { 00:27:46.242 "job": "nvme0n1", 00:27:46.242 "core_mask": "0x2", 00:27:46.242 "workload": "randwrite", 00:27:46.242 "status": "finished", 00:27:46.242 "queue_depth": 128, 00:27:46.242 "io_size": 4096, 00:27:46.242 "runtime": 2.005776, 00:27:46.242 "iops": 27947.78679174544, 00:27:46.242 "mibps": 109.17104215525562, 00:27:46.242 "io_failed": 0, 00:27:46.242 "io_timeout": 0, 00:27:46.242 "avg_latency_us": 4573.166935332127, 00:27:46.242 "min_latency_us": 2308.006956521739, 00:27:46.242 "max_latency_us": 11055.638260869566 00:27:46.242 } 00:27:46.242 ], 00:27:46.242 "core_count": 1 00:27:46.242 } 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:46.242 | select(.opcode=="crc32c") 00:27:46.242 | "\(.module_name) \(.executed)"' 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 882499 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 882499 ']' 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 882499 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 882499 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 882499' 00:27:46.242 killing process with pid 882499 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 882499 00:27:46.242 Received shutdown signal, test time was about 2.000000 seconds 00:27:46.242 00:27:46.242 Latency(us) 00:27:46.242 [2024-11-26T06:37:14.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.242 [2024-11-26T06:37:14.342Z] =================================================================================================================== 00:27:46.242 [2024-11-26T06:37:14.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:46.242 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 882499 00:27:46.501 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:46.501 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:46.501 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:46.501 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:46.501 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:46.501 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:46.501 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=883177 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 883177 /var/tmp/bperf.sock 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 883177 ']' 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:46.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.502 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:46.502 [2024-11-26 07:37:14.448852] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:27:46.502 [2024-11-26 07:37:14.448900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883177 ] 00:27:46.502 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:46.502 Zero copy mechanism will not be used. 00:27:46.502 [2024-11-26 07:37:14.510743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.502 [2024-11-26 07:37:14.554510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.760 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:46.760 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:46.760 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:46.760 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:46.760 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:47.019 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:47.019 07:37:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:47.278 nvme0n1 00:27:47.278 07:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:47.278 07:37:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:47.278 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:47.278 Zero copy mechanism will not be used. 00:27:47.278 Running I/O for 2 seconds... 
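This second clean pass repeats the flow at a different shape (randwrite, 128 KiB blocks, queue depth 16, large enough that the zero-copy path is skipped, as the 65536-byte threshold notices state). The lines that follow give the results and then the crc32c accounting check: after perform_tests the script reads accel_get_stats over the same socket and filters the crc32c opcode with jq, expecting the software module to have executed it since DSA scanning is disabled (scan_dsa=false). A minimal sketch of that check; the jq filter and the read into acc_module/acc_executed are verbatim from the trace, while the variables and the final echo are illustrative:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # Pull accel framework stats and keep only the crc32c opcode: "module executed-count".
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    # The clean test passes only if crc32c ran at least once and in the expected module.
    (( acc_executed > 0 )) && [[ $acc_module == software ]] \
        && echo "crc32c digests handled by $acc_module ($acc_executed ops)"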
00:27:49.591 6811.00 IOPS, 851.38 MiB/s [2024-11-26T06:37:17.691Z] 6903.50 IOPS, 862.94 MiB/s 00:27:49.591 Latency(us) 00:27:49.591 [2024-11-26T06:37:17.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.591 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:49.591 nvme0n1 : 2.00 6897.48 862.19 0.00 0.00 2315.21 1652.65 9459.98 00:27:49.591 [2024-11-26T06:37:17.691Z] =================================================================================================================== 00:27:49.591 [2024-11-26T06:37:17.691Z] Total : 6897.48 862.19 0.00 0.00 2315.21 1652.65 9459.98 00:27:49.591 { 00:27:49.591 "results": [ 00:27:49.591 { 00:27:49.591 "job": "nvme0n1", 00:27:49.591 "core_mask": "0x2", 00:27:49.591 "workload": "randwrite", 00:27:49.591 "status": "finished", 00:27:49.591 "queue_depth": 16, 00:27:49.591 "io_size": 131072, 00:27:49.591 "runtime": 2.004644, 00:27:49.591 "iops": 6897.48404205435, 00:27:49.591 "mibps": 862.1855052567937, 00:27:49.591 "io_failed": 0, 00:27:49.591 "io_timeout": 0, 00:27:49.591 "avg_latency_us": 2315.205651953802, 00:27:49.591 "min_latency_us": 1652.6469565217392, 00:27:49.591 "max_latency_us": 9459.979130434782 00:27:49.591 } 00:27:49.591 ], 00:27:49.591 "core_count": 1 00:27:49.591 } 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:49.591 | select(.opcode=="crc32c") 00:27:49.591 | "\(.module_name) \(.executed)"' 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 883177 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 883177 ']' 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 883177 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883177 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883177' 00:27:49.591 killing process with pid 883177 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 883177 00:27:49.591 Received shutdown signal, test time was about 2.000000 seconds 00:27:49.591 00:27:49.591 Latency(us) 00:27:49.591 [2024-11-26T06:37:17.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.591 [2024-11-26T06:37:17.691Z] =================================================================================================================== 00:27:49.591 [2024-11-26T06:37:17.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:49.591 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 883177 00:27:49.850 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 881505 00:27:49.850 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 881505 ']' 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 881505 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 881505 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 881505' 00:27:49.851 killing process with pid 881505 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 881505 00:27:49.851 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 881505 00:27:50.110 00:27:50.110 real 0m13.615s 00:27:50.110 user 0m25.945s 00:27:50.110 sys 0m4.488s 00:27:50.110 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.110 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:50.110 ************************************ 00:27:50.110 END TEST nvmf_digest_clean 00:27:50.110 ************************************ 00:27:50.110 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:50.110 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:50.110 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:50.110 07:37:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.110 ************************************ 00:27:50.110 START TEST nvmf_digest_error 00:27:50.110 ************************************ 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=883679 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 883679 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 883679 ']' 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.110 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.111 [2024-11-26 07:37:18.080633] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:27:50.111 [2024-11-26 07:37:18.080676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.111 [2024-11-26 07:37:18.146595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.111 [2024-11-26 07:37:18.182736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.111 [2024-11-26 07:37:18.182773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.111 [2024-11-26 07:37:18.182781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.111 [2024-11-26 07:37:18.182787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.111 [2024-11-26 07:37:18.182793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:50.111 [2024-11-26 07:37:18.183385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 [2024-11-26 07:37:18.267855] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 null0 00:27:50.370 [2024-11-26 07:37:18.359710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.370 [2024-11-26 07:37:18.383896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=883726 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 883726 /var/tmp/bperf.sock 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 883726 ']' 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
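The digest-error phase reverses the setup: before any connection is made, crc32c on the nvmf target is routed to the accel error-injection module (rpc_cmd accel_assign_opc -o crc32c -m error, digest.sh line 104 in the trace), bdevperf is then started in randread mode, and once the controller is attached the test switches injection to corrupt so that the host-side digest check sees deliberately bad data digests; the long run of "data digest error on tqpair" entries below is that detection path firing. A minimal sketch of the injection RPCs, assuming (as the autotest helpers do) that rpc_cmd is a thin wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock; the RPC variable is illustrative:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"    # stands in for rpc_cmd from the trace

    # Route all crc32c work through the error-injection accel module before listeners go live.
    "$RPC" accel_assign_opc -o crc32c -m error

    # digest.sh lines 63/67 in the trace: injection starts disabled, then is switched to
    # corrupt crc32c results (-t corrupt -i 256); in the real script the bdevperf controller
    # attach happens between these two calls.
    "$RPC" accel_error_inject_error -o crc32c -t disable
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256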
00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:50.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.370 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 [2024-11-26 07:37:18.422089] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:27:50.370 [2024-11-26 07:37:18.422132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883726 ] 00:27:50.630 [2024-11-26 07:37:18.484779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.630 [2024-11-26 07:37:18.527306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.630 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.630 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:50.630 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:50.630 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:50.889 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:50.889 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.889 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.889 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.889 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.889 07:37:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:51.148 nvme0n1 00:27:51.148 07:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:51.148 07:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.148 07:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:51.148 
07:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.148 07:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:51.148 07:37:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:51.148 Running I/O for 2 seconds... 00:27:51.148 [2024-11-26 07:37:19.176848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.148 [2024-11-26 07:37:19.176881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.148 [2024-11-26 07:37:19.176891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.148 [2024-11-26 07:37:19.189516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.148 [2024-11-26 07:37:19.189541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.148 [2024-11-26 07:37:19.189550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.148 [2024-11-26 07:37:19.197277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.148 [2024-11-26 07:37:19.197298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.148 [2024-11-26 07:37:19.197307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.148 [2024-11-26 07:37:19.209294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.148 [2024-11-26 07:37:19.209317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.148 [2024-11-26 07:37:19.209327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.149 [2024-11-26 07:37:19.219181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.149 [2024-11-26 07:37:19.219202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.149 [2024-11-26 07:37:19.219210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.149 [2024-11-26 07:37:19.228589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.149 [2024-11-26 07:37:19.228615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.149 [2024-11-26 07:37:19.228624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:51.149 [2024-11-26 07:37:19.238301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.149 [2024-11-26 07:37:19.238330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.149 [2024-11-26 07:37:19.238338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.249164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.249186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.249194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.258487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.258508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.258516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.269518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.269540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.269548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.279243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.279265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.279273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.288694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.288715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.288723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.298496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.298517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.298526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.308782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.308803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.308811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.317051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.317073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.317081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.328782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.328804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.328812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.337707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.337727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.337735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.346940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.346967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.346976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.356799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.356819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.356828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.366545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.366566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.366575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.376107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.376129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.376138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.386365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.386386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.386394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.395374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.408 [2024-11-26 07:37:19.395398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.408 [2024-11-26 07:37:19.395407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.408 [2024-11-26 07:37:19.405442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.405463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.405471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.414814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.414835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.414843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.424189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.424209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.424219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.433366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.433389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:51.409 [2024-11-26 07:37:19.433397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.442501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.442522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.442531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.453845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.453867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.453875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.461709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.461730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.461739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.473501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.473522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.473531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.486509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.486531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.486539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.409 [2024-11-26 07:37:19.498816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.409 [2024-11-26 07:37:19.498837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.409 [2024-11-26 07:37:19.498846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.668 [2024-11-26 07:37:19.509448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.668 [2024-11-26 07:37:19.509470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.668 [2024-11-26 07:37:19.509478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.668 [2024-11-26 07:37:19.520769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.668 [2024-11-26 07:37:19.520791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.668 [2024-11-26 07:37:19.520799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.668 [2024-11-26 07:37:19.529527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.529547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.529556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.540890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.540912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.540920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.550009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.550029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.550037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.559272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.559293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.559301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.569118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.569140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.569156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.578041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.578062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.578071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.588160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.588182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.588191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.599630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.599651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.599659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.607812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.607834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.607842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.619973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.619994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.620002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.632197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.632218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.632226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.643590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.643611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.643620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.653032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 
00:27:51.669 [2024-11-26 07:37:19.653053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.653061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.665122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.665147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.665156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.673719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.673741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.673749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.684165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.684186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.684194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.693055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.693076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.693084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.705390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.705414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.705422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.715094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.715116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.715124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.725592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.725614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.725622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.734383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.734407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.734416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.744193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.744215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.744223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.669 [2024-11-26 07:37:19.753316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.669 [2024-11-26 07:37:19.753338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.669 [2024-11-26 07:37:19.753346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.928 [2024-11-26 07:37:19.763360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.928 [2024-11-26 07:37:19.763382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.928 [2024-11-26 07:37:19.763391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.928 [2024-11-26 07:37:19.771998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.928 [2024-11-26 07:37:19.772020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.928 [2024-11-26 07:37:19.772028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.928 [2024-11-26 07:37:19.783061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.928 [2024-11-26 07:37:19.783083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.928 [2024-11-26 07:37:19.783092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.928 [2024-11-26 07:37:19.796087] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.796110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.796118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.806379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.806400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.806408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.815146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.815168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.815176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.826157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.826178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.826187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.837158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.837180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.837192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.845428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.845449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.845458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.856930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.856957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.856965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.868560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.868582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.868590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.881177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.881211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.881219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.893847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.893870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.893879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.901861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.901882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.901890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.912918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.912940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.912954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.922742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.922764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.922772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.933698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.933721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.933729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.941557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.941578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.941587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.952981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.953003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.953011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.963001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.963022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.963030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.971808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.971831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.971841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.981807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.981829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.981837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:19.991873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:19.991895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.929 [2024-11-26 07:37:19.991903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.929 [2024-11-26 07:37:20.001764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.929 [2024-11-26 07:37:20.001786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.930 [2024-11-26 07:37:20.001794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.930 [2024-11-26 07:37:20.010974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.930 [2024-11-26 07:37:20.010996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.930 [2024-11-26 07:37:20.011009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:51.930 [2024-11-26 07:37:20.021366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:51.930 [2024-11-26 07:37:20.021388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.930 [2024-11-26 07:37:20.021397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.031407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.031430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.031439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.042736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.042758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.042766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.053018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.053039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.053048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.063769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.063814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.063830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.072254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.072276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.189 [2024-11-26 07:37:20.072285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.083977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.083999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.084008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.095506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.095527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.095537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.103957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.103983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.103992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.116299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.116321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.116330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.127971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.127994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.128002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.140745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.140767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.140776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.148878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.148900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:7108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.148908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.161048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.161072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.161080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 24811.00 IOPS, 96.92 MiB/s [2024-11-26T06:37:20.289Z] [2024-11-26 07:37:20.174559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.174581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.189 [2024-11-26 07:37:20.174590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.189 [2024-11-26 07:37:20.185988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.189 [2024-11-26 07:37:20.186010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.186018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.194249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 [2024-11-26 07:37:20.194270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.194278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.206752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 [2024-11-26 07:37:20.206775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.206783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.219127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 [2024-11-26 07:37:20.219148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.219157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.230334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 
[2024-11-26 07:37:20.230355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.230364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.240276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 [2024-11-26 07:37:20.240297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.240305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.250512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 [2024-11-26 07:37:20.250533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.250542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.261130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 [2024-11-26 07:37:20.261152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.261160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.269055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 [2024-11-26 07:37:20.269076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.269084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.190 [2024-11-26 07:37:20.279903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.190 [2024-11-26 07:37:20.279924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.190 [2024-11-26 07:37:20.279933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.290367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.290392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.290400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.302235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.302255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.302263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.314752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.314772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.314780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.323824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.323846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.323854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.336034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.336055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.336064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.347925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.347945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.347960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.357112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.357133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.357141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.368965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.369003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.369011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.380339] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.380359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.380366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.389089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.389110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.389118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.400474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.400495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.400504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.411079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.411100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.411109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.419486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.419507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.419516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.429325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.429346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.429354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.438749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.438771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.438780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.447839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.447860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.447869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.457341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.457362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.457371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.466009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.466030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.466042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.477190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.477211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.477219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.488464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.488484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.488493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.496661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.496682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.450 [2024-11-26 07:37:20.496691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.450 [2024-11-26 07:37:20.509690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.450 [2024-11-26 07:37:20.509712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.451 [2024-11-26 07:37:20.509720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.451 [2024-11-26 07:37:20.522242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.451 [2024-11-26 07:37:20.522264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.451 [2024-11-26 07:37:20.522272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.451 [2024-11-26 07:37:20.533125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.451 [2024-11-26 07:37:20.533146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.451 [2024-11-26 07:37:20.533154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.451 [2024-11-26 07:37:20.542743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.451 [2024-11-26 07:37:20.542764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.451 [2024-11-26 07:37:20.542772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.554619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.554640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.554648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.564059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.564084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.564092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.573211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.573233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.573241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.582918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.582941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.582954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.594304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.594326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.594335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.602642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.602666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.602675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.612903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.612925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.612933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.622359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.622380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.622388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.632684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.632705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.632713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.641429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.641450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.710 [2024-11-26 07:37:20.641458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.710 [2024-11-26 07:37:20.651305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.710 [2024-11-26 07:37:20.651326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:52.711 [2024-11-26 07:37:20.651334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.660882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.660904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.660912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.670690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.670711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.670719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.679555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.679576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.679586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.689929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.689956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.689965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.699642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.699663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.699671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.707882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.707903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.707912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.718726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.718746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:21533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.718754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.728436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.728456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.728468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.737644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.737664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.737671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.746841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.746862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.746871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.758824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.758847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.758856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.771439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.771460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.771468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.779658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.779679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.779687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.791537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.791558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.791566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.711 [2024-11-26 07:37:20.803279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.711 [2024-11-26 07:37:20.803300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.711 [2024-11-26 07:37:20.803308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.970 [2024-11-26 07:37:20.812118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.812138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.812146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.824164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.824185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.824193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.832739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.832759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.832767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.844385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.844405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.844413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.852614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.852634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.852642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.862740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 
00:27:52.971 [2024-11-26 07:37:20.862762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.862770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.873766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.873786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.873793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.883906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.883927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.883935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.893647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.893669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.893677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.902256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.902276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.902290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.911823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.911844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.911852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.922469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.922490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.922498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.931853] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.931873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.931881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.940861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.940882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.940890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.950187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.950208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.950216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.959263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.959284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.959292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.969435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.969456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.969464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.978882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.978904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.978912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.987556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.987580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.987588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:20.998931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:20.998959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:20.998967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:21.008673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:21.008694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:21.008703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:21.017659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:21.017680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:21.017688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:21.027250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:21.027271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:21.027280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:21.037180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:21.037201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:21.037209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:21.045880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:21.045901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:21.045909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:52.971 [2024-11-26 07:37:21.058091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:52.971 [2024-11-26 07:37:21.058113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.971 [2024-11-26 07:37:21.058121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.231 [2024-11-26 07:37:21.069918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:53.231 [2024-11-26 07:37:21.069939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.231 [2024-11-26 07:37:21.069952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.231 [2024-11-26 07:37:21.079152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:53.231 [2024-11-26 07:37:21.079173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.231 [2024-11-26 07:37:21.079181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.231 [2024-11-26 07:37:21.089753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:53.231 [2024-11-26 07:37:21.089774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.231 [2024-11-26 07:37:21.089782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.231 [2024-11-26 07:37:21.098287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:53.231 [2024-11-26 07:37:21.098307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.231 [2024-11-26 07:37:21.098315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.231 [2024-11-26 07:37:21.108330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:53.231 [2024-11-26 07:37:21.108351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.231 [2024-11-26 07:37:21.108360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.231 [2024-11-26 07:37:21.117757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:53.231 [2024-11-26 07:37:21.117778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.231 [2024-11-26 07:37:21.117786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:53.231 [2024-11-26 07:37:21.128093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390) 00:27:53.231 [2024-11-26 07:37:21.128114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.231 [2024-11-26 07:37:21.128123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:53.231 [2024-11-26 07:37:21.137614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390)
00:27:53.231 [2024-11-26 07:37:21.137635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.231 [2024-11-26 07:37:21.137643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:53.231 [2024-11-26 07:37:21.147048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390)
00:27:53.231 [2024-11-26 07:37:21.147068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.231 [2024-11-26 07:37:21.147077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:53.231 [2024-11-26 07:37:21.156003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390)
00:27:53.231 [2024-11-26 07:37:21.156023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.231 [2024-11-26 07:37:21.156034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:53.231 [2024-11-26 07:37:21.165433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19ea390)
00:27:53.231 [2024-11-26 07:37:21.165453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.231 [2024-11-26 07:37:21.165462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:53.231 25023.00 IOPS, 97.75 MiB/s
00:27:53.231 Latency(us)
00:27:53.231 [2024-11-26T06:37:21.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:53.231 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:53.231 nvme0n1 : 2.00 25044.62 97.83 0.00 0.00 5106.04 2379.24 15956.59
00:27:53.231 [2024-11-26T06:37:21.331Z] ===================================================================================================================
00:27:53.231 [2024-11-26T06:37:21.331Z] Total : 25044.62 97.83 0.00 0.00 5106.04 2379.24 15956.59
00:27:53.231 {
00:27:53.231   "results": [
00:27:53.231     {
00:27:53.231       "job": "nvme0n1",
00:27:53.231       "core_mask": "0x2",
00:27:53.231       "workload": "randread",
00:27:53.231       "status": "finished",
00:27:53.231       "queue_depth": 128,
00:27:53.231       "io_size": 4096,
00:27:53.231       "runtime": 2.003384,
00:27:53.231       "iops": 25044.624495353863,
00:27:53.231       "mibps": 97.83056443497603,
00:27:53.231       "io_failed": 0,
00:27:53.231       "io_timeout": 0,
00:27:53.231       "avg_latency_us": 5106.043738295081,
00:27:53.231       "min_latency_us": 2379.241739130435,
00:27:53.231       "max_latency_us": 15956.591304347827
00:27:53.231     }
00:27:53.231   ],
00:27:53.231   "core_count": 1
00:27:53.231 }
00:27:53.231 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:53.231 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:53.231 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:53.231 | .driver_specific
00:27:53.231 | .nvme_error
00:27:53.231 | .status_code
00:27:53.231 | .command_transient_transport_error'
00:27:53.231 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 196 > 0 ))
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 883726
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 883726 ']'
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 883726
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883726
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883726'
00:27:53.491 killing process with pid 883726
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 883726
00:27:53.491 Received shutdown signal, test time was about 2.000000 seconds
00:27:53.491
00:27:53.491 Latency(us)
00:27:53.491 [2024-11-26T06:37:21.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:53.491 [2024-11-26T06:37:21.591Z] ===================================================================================================================
00:27:53.491 [2024-11-26T06:37:21.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:53.491 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 883726
00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=884384
00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 884384 /var/tmp/bperf.sock
00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@835 -- # '[' -z 884384 ']' 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:53.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:53.750 [2024-11-26 07:37:21.646994] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:27:53.750 [2024-11-26 07:37:21.647044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884384 ] 00:27:53.750 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:53.750 Zero copy mechanism will not be used. 00:27:53.750 [2024-11-26 07:37:21.709037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.750 [2024-11-26 07:37:21.751509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:53.750 07:37:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:54.009 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:54.009 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.009 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.009 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.009 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:54.009 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:54.577 
nvme0n1 00:27:54.577 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:54.577 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.577 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.577 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.577 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:54.577 07:37:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:54.577 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:54.577 Zero copy mechanism will not be used. 00:27:54.577 Running I/O for 2 seconds... 00:27:54.578 [2024-11-26 07:37:22.599477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.599514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.599524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.605862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.605889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.605898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.612378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.612402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.612411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.616804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.616828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.616836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.624698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.624722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.624731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.633032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.633055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.633065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.640392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.640417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.640425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.648278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.648302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.648311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.656676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.656701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.656711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.578 [2024-11-26 07:37:22.664490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.578 [2024-11-26 07:37:22.664514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.578 [2024-11-26 07:37:22.664523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.838 [2024-11-26 07:37:22.672190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.838 [2024-11-26 07:37:22.672214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.838 [2024-11-26 07:37:22.672224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.838 [2024-11-26 07:37:22.680594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.838 [2024-11-26 07:37:22.680618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.838 [2024-11-26 07:37:22.680626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.838 [2024-11-26 07:37:22.688448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.838 [2024-11-26 07:37:22.688470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.688479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.696357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.696380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.696389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.703807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.703831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.703843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.711294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.711316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.711325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.719292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.719315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.719323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.725305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.725329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.725338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.731077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.731099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.731107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.737096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.737120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.737128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.743044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.743066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.743074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.749075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.749098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.749106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.754979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.755003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.755013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.760923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.760957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.760966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.766710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.766733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.766741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.772470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.772492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 
[2024-11-26 07:37:22.772501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.778138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.778160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.778168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.783834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.783857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.783867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.789534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.789557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.789565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.795241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.795266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.795275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.800920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.800944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.800960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.806684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.806706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.806715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.812309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.812333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.812342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.818022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.818044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.818055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.823629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.823652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.823661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.829282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.829305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.829313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.834920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.839 [2024-11-26 07:37:22.834943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.839 [2024-11-26 07:37:22.834959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.839 [2024-11-26 07:37:22.840442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.840465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.840473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.845996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.846017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.846025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.851545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.851566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.851575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.857237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.857260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.857278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.863041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.863064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.863072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.868772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.868794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.868804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.874393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.874416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.874426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.880044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.880068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.880077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.885600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.885622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.885631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.891082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.891104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.891114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.896556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.896577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.896585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.901972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.901993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.902001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.907433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.907458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.907466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.912853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.912875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.912883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.918302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.918324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.918331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.923682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.923705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.923713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.840 [2024-11-26 07:37:22.929230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:54.840 [2024-11-26 07:37:22.929252] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.840 [2024-11-26 07:37:22.929260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.934773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.934796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.934804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.940375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.940396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.940404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.945806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.945827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.945835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.951251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.951272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.951280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.956687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.956709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.956717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.962098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.962118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.962126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.967524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 
00:27:55.101 [2024-11-26 07:37:22.967546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.967554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.972802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.972824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.972832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.978373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.978395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.978403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.983797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.983818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.983826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.989213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.989233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.989241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.994483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.994506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.994514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:22.999799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:22.999822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:22.999833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:23.005177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:23.005200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:23.005208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:23.010726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:23.010747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:23.010755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:23.016162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:23.016183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:23.016191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:23.021570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:23.021592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:23.021600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:23.027067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:23.027088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:23.027097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:23.032589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:23.032610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:23.032619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:23.037996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:23.038017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:23.038025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.101 [2024-11-26 07:37:23.043445] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.101 [2024-11-26 07:37:23.043466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.101 [2024-11-26 07:37:23.043474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
[... the same three-record pattern repeats for the remainder of this interval: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error on tqpair=(0x9205a0), followed by the nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* READ print (sqid:1, cid 0-14, nsid:1, len:32, varying lba, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and the nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, timestamps 2024-11-26 07:37:23.048 through 07:37:23.910, wall clock 00:27:55.101 through 00:27:55.888 ...] 
00:27:55.627 5208.00 IOPS, 651.00 MiB/s [2024-11-26T06:37:23.727Z] 
00:27:55.888 [2024-11-26 07:37:23.915650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9205a0) 00:27:55.888 [2024-11-26 07:37:23.915672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-11-26 07:37:23.915679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.888 [2024-11-26 07:37:23.921218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.888 [2024-11-26 07:37:23.921239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-11-26 07:37:23.921247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.888 [2024-11-26 07:37:23.926924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.888 [2024-11-26 07:37:23.926946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-11-26 07:37:23.926960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.888 [2024-11-26 07:37:23.932494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.888 [2024-11-26 07:37:23.932516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.888 [2024-11-26 07:37:23.932524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.888 [2024-11-26 07:37:23.938265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.889 [2024-11-26 07:37:23.938287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-11-26 07:37:23.938296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.889 [2024-11-26 07:37:23.944060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.889 [2024-11-26 07:37:23.944082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-11-26 07:37:23.944091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.889 [2024-11-26 07:37:23.949816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.889 [2024-11-26 07:37:23.949837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-11-26 07:37:23.949845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.889 [2024-11-26 07:37:23.955590] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.889 [2024-11-26 07:37:23.955613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-11-26 07:37:23.955621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.889 [2024-11-26 07:37:23.961323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.889 [2024-11-26 07:37:23.961344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-11-26 07:37:23.961352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.889 [2024-11-26 07:37:23.966969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.889 [2024-11-26 07:37:23.966990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-11-26 07:37:23.966998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.889 [2024-11-26 07:37:23.972365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.889 [2024-11-26 07:37:23.972387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-11-26 07:37:23.972394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.889 [2024-11-26 07:37:23.977714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:55.889 [2024-11-26 07:37:23.977737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.889 [2024-11-26 07:37:23.977748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:23.983324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:23.983346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:23.983354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:23.989093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:23.989115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:23.989123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:56.149 [2024-11-26 07:37:23.994882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:23.994905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:23.994913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:24.000444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:24.000466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:24.000474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:24.006051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:24.006082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:24.006090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:24.011725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:24.011746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:24.011754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:24.017426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:24.017447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:24.017455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:24.022852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:24.022874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:24.022882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:24.028274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:24.028299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:24.028308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.149 [2024-11-26 07:37:24.033705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.149 [2024-11-26 07:37:24.033727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.149 [2024-11-26 07:37:24.033734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.039168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.039190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.039199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.044621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.044643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.044651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.050048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.050071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.050080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.055435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.055457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.055465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.060841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.060864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.060873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.066475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.066497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.066505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.071731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.071754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.071762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.077166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.077189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.077197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.083691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.083714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.083726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.089492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.089514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.089523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.096757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.096779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.096787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.104364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.104386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.104394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.111026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.111049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.111058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.117359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.117385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.117395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.123799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.123823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.123832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.131496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.131519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.131532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.137856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.137880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.137888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.144799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.144822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.150 [2024-11-26 07:37:24.144831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.150 [2024-11-26 07:37:24.150490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.150 [2024-11-26 07:37:24.150512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.150520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.156192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.156215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 
[2024-11-26 07:37:24.156223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.161906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.161928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.161936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.167625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.167648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.167655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.173140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.173162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.173170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.178678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.178700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.178708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.184137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.184163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.184171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.189625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.189648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.189655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.195121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.195156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.195164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.200504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.200526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.200534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.205816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.205839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.205848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.211126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.211148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.211156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.216342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.216368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.216376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.221603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.221625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.221634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.226869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.226890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.226898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.232145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.232167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.232175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.151 [2024-11-26 07:37:24.237368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.151 [2024-11-26 07:37:24.237391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.151 [2024-11-26 07:37:24.237399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.242654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.242676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.242684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.247899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.247922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.247930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.253034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.253055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.253063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.258262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.258284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.258293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.263499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.263521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.263528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.268752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.268773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.268782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.273920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.273943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.273962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.278723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.278746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.278754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.283984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.284006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.284014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.289216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.289238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.289246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.294484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.294507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.294516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.299736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.299759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.299767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.304997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 
[2024-11-26 07:37:24.305019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.305027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.310203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.310225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.310233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.315449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.315471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.315479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.320653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.320680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.320688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.325789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.325811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.325819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.331021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.331044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.331052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.336333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.336355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.336363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.341677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.341698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.341706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.347098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.347120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.347128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.352655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.352679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.352687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.358292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.358315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.358323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.363613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.363635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.363644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.369003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.369026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.412 [2024-11-26 07:37:24.369035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.412 [2024-11-26 07:37:24.374569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.412 [2024-11-26 07:37:24.374591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.374599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.380138] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.380160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.380168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.385685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.385708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.385717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.391168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.391191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.391198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.396678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.396700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.396707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.402136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.402158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.402165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.407588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.407610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.407618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.413123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.413146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.413157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:56.413 [2024-11-26 07:37:24.418872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.418895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.418903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.424511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.424534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.424542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.430010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.430032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.430040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.435453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.435476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.435483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.440900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.440923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.440931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.446673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.446695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.446703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.452546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.452573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.452583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.458114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.458136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.458144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.463664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.463687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.463697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.469387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.469409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.469417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.474968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.474990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.474998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.480531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.480553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.480561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.485895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.485917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.485925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.491477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.491499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.491507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.496910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.496938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.496946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.413 [2024-11-26 07:37:24.502471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.413 [2024-11-26 07:37:24.502500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.413 [2024-11-26 07:37:24.502508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.508404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.508426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.508437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.514298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.514320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.514329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.520180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.520202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.520210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.525861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.525883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.525892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.531554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.531576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.531584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.537300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.537321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.537330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.543026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.543046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.543054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.548958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.548980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.548988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.555073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.555096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.555104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.561179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.561205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.561213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.566790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.566812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.566821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.572284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.572307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 
[2024-11-26 07:37:24.572315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.577885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.577907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.577915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.583390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.583412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.583420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.588999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.589020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.589028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.673 [2024-11-26 07:37:24.594791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9205a0) 00:27:56.673 [2024-11-26 07:37:24.594814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.673 [2024-11-26 07:37:24.594822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.673 5275.50 IOPS, 659.44 MiB/s 00:27:56.673 Latency(us) 00:27:56.673 [2024-11-26T06:37:24.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.673 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:56.673 nvme0n1 : 2.00 5277.77 659.72 0.00 0.00 3029.02 911.81 10770.70 00:27:56.673 [2024-11-26T06:37:24.773Z] =================================================================================================================== 00:27:56.673 [2024-11-26T06:37:24.773Z] Total : 5277.77 659.72 0.00 0.00 3029.02 911.81 10770.70 00:27:56.673 { 00:27:56.673 "results": [ 00:27:56.673 { 00:27:56.673 "job": "nvme0n1", 00:27:56.673 "core_mask": "0x2", 00:27:56.673 "workload": "randread", 00:27:56.673 "status": "finished", 00:27:56.673 "queue_depth": 16, 00:27:56.673 "io_size": 131072, 00:27:56.673 "runtime": 2.002173, 00:27:56.673 "iops": 5277.765707558738, 00:27:56.673 "mibps": 659.7207134448422, 00:27:56.673 "io_failed": 0, 00:27:56.673 "io_timeout": 0, 00:27:56.673 "avg_latency_us": 3029.02369229883, 00:27:56.673 "min_latency_us": 911.8052173913044, 00:27:56.673 "max_latency_us": 10770.699130434783 00:27:56.673 } 00:27:56.673 ], 00:27:56.673 "core_count": 1 00:27:56.673 } 00:27:56.673 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
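The block just above is the perform_tests result for the randread digest-error run, printed first as a human-readable table and then as JSON. For reference, the headline numbers can be read straight out of that JSON; a minimal sketch, assuming the JSON has been saved to a file (results.json is an illustrative name only, not something the harness produces):

    # Prints "5277.765707558738 IOPS, 3029.02369229883 us avg latency, 0 failed",
    # matching the table rendered above the JSON.
    jq -r '.results[0] | "\(.iops) IOPS, \(.avg_latency_us) us avg latency, \(.io_failed) failed"' results.json
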
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:56.673 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:56.673 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:56.673 | .driver_specific 00:27:56.673 | .nvme_error 00:27:56.673 | .status_code 00:27:56.673 | .command_transient_transport_error' 00:27:56.673 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 341 > 0 )) 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 884384 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 884384 ']' 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 884384 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884384 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884384' 00:27:56.933 killing process with pid 884384 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 884384 00:27:56.933 Received shutdown signal, test time was about 2.000000 seconds 00:27:56.933 00:27:56.933 Latency(us) 00:27:56.933 [2024-11-26T06:37:25.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.933 [2024-11-26T06:37:25.033Z] =================================================================================================================== 00:27:56.933 [2024-11-26T06:37:25.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:56.933 07:37:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 884384 00:27:56.933 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:56.933 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:56.933 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:56.933 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=884868 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 884868 /var/tmp/bperf.sock 00:27:57.192 07:37:25 
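In short, the get_transient_errcount trace above boils down to the following sketch (paths relative to the SPDK checkout; the RPC socket, bdev name and jq filter are copied from the trace, and the 341 in the (( 341 > 0 )) check is simply that counter's live value at this point in the run):

    # Ask the running bdevperf instance for per-bdev I/O statistics and pull out the
    # NVMe error counter that the injected data digest errors increment. The counter
    # exists because bdev_nvme_set_options is called with --nvme-error-stat
    # (see the setup trace for the next run, just below).
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # Every "data digest error" recorded above was completed back to the bdev layer as
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the test requires a non-zero count.
    (( errcount > 0 ))
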
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 884868 ']' 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:57.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.192 [2024-11-26 07:37:25.072511] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:27:57.192 [2024-11-26 07:37:25.072561] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884868 ] 00:27:57.192 [2024-11-26 07:37:25.135259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.192 [2024-11-26 07:37:25.172874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:57.192 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:57.451 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:57.451 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.451 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.451 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.451 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.451 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.710 nvme0n1 00:27:57.710 07:37:25 
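Condensed, the setup for this second bperf run, traced above and continuing just below with the crc32c corruption injection and perform_tests, is roughly the following sketch. Paths are shortened relative to the SPDK checkout, addresses and flags are taken from the trace, and rpc_cmd is the harness's own RPC wrapper whose socket is not shown at this point, so it is left as-is rather than expanded:

    # Start bdevperf idle on its own RPC socket (-z waits for perform_tests):
    # 4 KiB random writes, queue depth 128, 2 second run, core mask 0x2.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Make sure crc32c error injection is off while attaching, then attach the
    # controller with data digests enabled (--ddgst) so payloads are CRC-checked.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt 256 crc32c operations so data digests stop matching, then start the run;
    # the digest errors logged below are those injected corruptions being detected.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
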
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:57.710 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.710 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.710 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.710 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:57.710 07:37:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:57.968 Running I/O for 2 seconds... 00:27:57.968 [2024-11-26 07:37:25.851577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166df550 00:27:57.968 [2024-11-26 07:37:25.852492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.852525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.860972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f3a28 00:27:57.968 [2024-11-26 07:37:25.861871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.861896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.871331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166dfdc0 00:27:57.968 [2024-11-26 07:37:25.872792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.872813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.881019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166efae0 00:27:57.968 [2024-11-26 07:37:25.882486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.882507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.888374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f92c0 00:27:57.968 [2024-11-26 07:37:25.889370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.889390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.897629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) 
with pdu=0x2000166f57b0 00:27:57.968 [2024-11-26 07:37:25.898933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.898960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.907330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f1430 00:27:57.968 [2024-11-26 07:37:25.908427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.908446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.915292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e1b48 00:27:57.968 [2024-11-26 07:37:25.915895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.915914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.924761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f1ca0 00:27:57.968 [2024-11-26 07:37:25.925484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.925505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.934361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e0630 00:27:57.968 [2024-11-26 07:37:25.935203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.935223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.943984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f2510 00:27:57.968 [2024-11-26 07:37:25.944939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.944967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:57.968 [2024-11-26 07:37:25.953610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166dfdc0 00:27:57.968 [2024-11-26 07:37:25.954690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.968 [2024-11-26 07:37:25.954711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:25.961355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xba2640) with pdu=0x2000166f96f8 00:27:57.969 [2024-11-26 07:37:25.961797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:25.961816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:25.971847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e73e0 00:27:57.969 [2024-11-26 07:37:25.973062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:25.973084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:25.980364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f8618 00:27:57.969 [2024-11-26 07:37:25.981185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:25.981205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:25.989695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166de8a8 00:27:57.969 [2024-11-26 07:37:25.990406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:25.990428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:25.999067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166fef90 00:27:57.969 [2024-11-26 07:37:26.000033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:26.000053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:26.007592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e7818 00:27:57.969 [2024-11-26 07:37:26.008547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:26.008566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:26.017354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e7818 00:27:57.969 [2024-11-26 07:37:26.018313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:26.018336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:26.026503] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xba2640) with pdu=0x2000166e7818 00:27:57.969 [2024-11-26 07:37:26.027462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:26.027482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:26.035674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e7818 00:27:57.969 [2024-11-26 07:37:26.036704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:26.036724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:26.044816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e7818 00:27:57.969 [2024-11-26 07:37:26.045871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:26.045890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:57.969 [2024-11-26 07:37:26.053969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e7818 00:27:57.969 [2024-11-26 07:37:26.055012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:57.969 [2024-11-26 07:37:26.055032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:58.227 [2024-11-26 07:37:26.063324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e7818 00:27:58.227 [2024-11-26 07:37:26.064391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.227 [2024-11-26 07:37:26.064411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:58.227 [2024-11-26 07:37:26.072902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f3a28 00:27:58.227 [2024-11-26 07:37:26.073972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.227 [2024-11-26 07:37:26.073991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:58.227 [2024-11-26 07:37:26.080489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f4298 00:27:58.227 [2024-11-26 07:37:26.081225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.227 [2024-11-26 07:37:26.081244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:58.227 [2024-11-26 07:37:26.089961] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e4140 00:27:58.227 [2024-11-26 07:37:26.090416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.227 [2024-11-26 07:37:26.090436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:58.227 [2024-11-26 07:37:26.099559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f8a50 00:27:58.227 [2024-11-26 07:37:26.100143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.227 [2024-11-26 07:37:26.100166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:58.227 [2024-11-26 07:37:26.109174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f7970 00:27:58.227 [2024-11-26 07:37:26.109866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.227 [2024-11-26 07:37:26.109885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:58.227 [2024-11-26 07:37:26.118077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166fe2e8 00:27:58.227 [2024-11-26 07:37:26.119314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.227 [2024-11-26 07:37:26.119334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:58.227 [2024-11-26 07:37:26.125999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.227 [2024-11-26 07:37:26.126581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.126600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.135740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.136337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.136356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.144806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.145366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.145385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 
[2024-11-26 07:37:26.153896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.154490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.154509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.163041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.163625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.163644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.172169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.172757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.172776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.181317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.181906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.181925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.190516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.191108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.191127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.199663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.200258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.200277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.208817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.209409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.209429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:27:58.228 [2024-11-26 07:37:26.217968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.218551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.218570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.227102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.227687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.227706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.236232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.236816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.236837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.245375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.245959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.245979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.254514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.255107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.255126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.263576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.264165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.264184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.272726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.273401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.273419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.281855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.282444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.282463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.291017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.291603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.291621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.300162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.300843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.300862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.309297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.309897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.309916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.228 [2024-11-26 07:37:26.318495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.228 [2024-11-26 07:37:26.319097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.228 [2024-11-26 07:37:26.319116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.327888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.488 [2024-11-26 07:37:26.328481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.488 [2024-11-26 07:37:26.328500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.337042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.488 [2024-11-26 07:37:26.337629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.488 [2024-11-26 07:37:26.337652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.346187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.488 [2024-11-26 07:37:26.346769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.488 [2024-11-26 07:37:26.346787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.355321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e5ec8 00:27:58.488 [2024-11-26 07:37:26.355900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.488 [2024-11-26 07:37:26.355919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.364716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e0630 00:27:58.488 [2024-11-26 07:37:26.365173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.488 [2024-11-26 07:37:26.365193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.374573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f4b08 00:27:58.488 [2024-11-26 07:37:26.375134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.488 [2024-11-26 07:37:26.375154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.384172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f81e0 00:27:58.488 [2024-11-26 07:37:26.384859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.488 [2024-11-26 07:37:26.384878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.392836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e1b48 00:27:58.488 [2024-11-26 07:37:26.394078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.488 [2024-11-26 07:37:26.394096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:58.488 [2024-11-26 07:37:26.400733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e4140 00:27:58.488 [2024-11-26 07:37:26.401309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.489 [2024-11-26 07:37:26.401329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:58.489 [2024-11-26 07:37:26.410342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e6fa8 00:27:58.489 [2024-11-26 07:37:26.411036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.489 [2024-11-26 07:37:26.411055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:58.489 [2024-11-26 07:37:26.421236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e6fa8 00:27:58.489 [2024-11-26 07:37:26.422409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.489 [2024-11-26 07:37:26.422429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:58.489 [2024-11-26 07:37:26.429180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166fcdd0 00:27:58.489 [2024-11-26 07:37:26.429870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.489 [2024-11-26 07:37:26.429889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:58.489 [2024-11-26 07:37:26.438328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166fcdd0 00:27:58.489 [2024-11-26 07:37:26.439019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.489 [2024-11-26 07:37:26.439038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:58.489 [2024-11-26 07:37:26.447385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166fcdd0 00:27:58.489 [2024-11-26 07:37:26.448067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.489 [2024-11-26 07:37:26.448086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:58.489 [2024-11-26 07:37:26.456520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166fcdd0 00:27:58.489 [2024-11-26 07:37:26.457217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.489 [2024-11-26 07:37:26.457237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:58.489 [2024-11-26 07:37:26.465662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166fcdd0 00:27:58.489 [2024-11-26 07:37:26.466363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:58.489 [2024-11-26 07:37:26.466382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:58.489 [2024-11-26 07:37:26.474811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166fcdd0
00:27:58.489 [2024-11-26 07:37:26.475505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:58.489 [2024-11-26 07:37:26.475523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error on tqpair (0xba2640), WRITE command print, completion with TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of further WRITE commands across several PDUs ...]
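The tcp.c:2233:data_crc32_calc_done errors above are SPDK's NVMe/TCP transport reporting that the CRC32C data digest (DDGST) received with a data PDU does not match the digest recomputed over the payload, which is exactly the error path this stretch of the run is exercising. For reference only, below is a minimal, dependency-free CRC32C sketch in Python; it illustrates the checksum involved, it is not SPDK's implementation, and the 4 KiB payload size simply mirrors the len:0x1000 seen in the log.

```python
# Minimal CRC32C (Castagnoli) illustration -- not SPDK's implementation.
# NVMe/TCP data digests (DDGST) are CRC32C values computed over the data
# PDU payload; a mismatch is what tcp.c reports as "Data digest error".

CRC32C_POLY = 0x82F63B78  # reflected form of the Castagnoli polynomial 0x1EDC6F41

def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise (slow but dependency-free) CRC32C over 'data'."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ CRC32C_POLY if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    payload = bytes(4096)   # one 4 KiB logical block, as in the log (len:0x1000)
    digest = crc32c(payload)
    print(f"ddgst over {len(payload)} zero bytes: 0x{digest:08x}")
```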
[... additional data digest error / TRANSIENT TRANSPORT ERROR (00/22) completions for WRITE commands on tqpair (0xba2640) across a range of PDUs ...]
00:27:59.009 27657.00 IOPS, 108.04 MiB/s [2024-11-26T06:37:27.109Z]
[... the data digest error / TRANSIENT TRANSPORT ERROR (00/22) pattern continues on tqpair (0xba2640) ...]
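Every WRITE caught by an injected digest error completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. Status Code Type 0x0 (generic command status) and Status Code 0x22, with dnr:0, so the host is allowed to retry. As a hedged sketch (the helper below and its names are illustrative, not an SPDK API; the bit layout follows the NVMe base specification), this is how the status word breaks down into the fields spdk_nvme_print_completion prints:

```python
# Hypothetical helper (names are mine, not an SPDK API): unpack the NVMe
# completion status word into the fields printed in the log above.
# Layout per the NVMe base specification: bit 0 = phase tag, bits 8:1 = SC,
# bits 11:9 = SCT, bits 13:12 = CRD, bit 14 = M (more), bit 15 = DNR.

from dataclasses import dataclass

@dataclass
class NvmeStatus:
    p: int    # phase tag
    sc: int   # status code
    sct: int  # status code type
    m: int    # more information available in the error log
    dnr: int  # do not retry

def decode_status(status16: int) -> NvmeStatus:
    return NvmeStatus(
        p=status16 & 0x1,
        sc=(status16 >> 1) & 0xFF,
        sct=(status16 >> 9) & 0x7,
        m=(status16 >> 14) & 0x1,
        dnr=(status16 >> 15) & 0x1,
    )

if __name__ == "__main__":
    # SCT 0x0 / SC 0x22 is what the log prints as "(00/22)",
    # the generic "Transient Transport Error" status.
    st = decode_status((0x22 << 1) | 0x0)
    print(f"sct={st.sct:#x} sc={st.sc:#x} m={st.m} dnr={st.dnr} p={st.p}")
```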
qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.692346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166de8a8 00:27:59.792 [2024-11-26 07:37:27.692914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.692934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.701756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e99d8 00:27:59.792 [2024-11-26 07:37:27.702203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.702222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.711295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e38d0 00:27:59.792 [2024-11-26 07:37:27.711847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.711867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.720907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f7da8 00:27:59.792 [2024-11-26 07:37:27.721581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.721600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.729804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166ebb98 00:27:59.792 [2024-11-26 07:37:27.731039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.731059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.738330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e4140 00:27:59.792 [2024-11-26 07:37:27.738898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.738917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.749571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e27f0 00:27:59.792 [2024-11-26 07:37:27.750732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.750752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.757330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e49b0 00:27:59.792 [2024-11-26 07:37:27.757872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.757891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.766918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e3060 00:27:59.792 [2024-11-26 07:37:27.767582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.767602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.775591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166feb58 00:27:59.792 [2024-11-26 07:37:27.776808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.776828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.785193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e6fa8 00:27:59.792 [2024-11-26 07:37:27.786540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.786559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.793080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e23b8 00:27:59.792 [2024-11-26 07:37:27.793758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.793777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.802689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f81e0 00:27:59.792 [2024-11-26 07:37:27.803487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.803506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:59.792 [2024-11-26 07:37:27.813640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f81e0 00:27:59.792 [2024-11-26 07:37:27.814895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.792 [2024-11-26 07:37:27.814914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:59.792 [2024-11-26 07:37:27.823236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166f46d0
00:27:59.792 [2024-11-26 07:37:27.824611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.792 [2024-11-26 07:37:27.824630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:59.792 [2024-11-26 07:37:27.831195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e27f0
00:27:59.792 [2024-11-26 07:37:27.832096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.792 [2024-11-26 07:37:27.832115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:59.792 [2024-11-26 07:37:27.840356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2640) with pdu=0x2000166e27f0
00:27:59.792 [2024-11-26 07:37:27.842149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:59.792 [2024-11-26 07:37:27.842170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:59.792 27645.50 IOPS, 107.99 MiB/s
00:27:59.792 Latency(us)
00:27:59.792 [2024-11-26T06:37:27.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:59.793 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:59.793 nvme0n1 : 2.00 27665.25 108.07 0.00 0.00 4621.51 2265.27 12708.29
00:27:59.793 [2024-11-26T06:37:27.893Z] ===================================================================================================================
00:27:59.793 [2024-11-26T06:37:27.893Z] Total : 27665.25 108.07 0.00 0.00 4621.51 2265.27 12708.29
00:27:59.793 {
00:27:59.793   "results": [
00:27:59.793     {
00:27:59.793       "job": "nvme0n1",
00:27:59.793       "core_mask": "0x2",
00:27:59.793       "workload": "randwrite",
00:27:59.793       "status": "finished",
00:27:59.793       "queue_depth": 128,
00:27:59.793       "io_size": 4096,
00:27:59.793       "runtime": 2.003199,
00:27:59.793       "iops": 27665.249433531066,
00:27:59.793       "mibps": 108.06738059973073,
00:27:59.793       "io_failed": 0,
00:27:59.793       "io_timeout": 0,
00:27:59.793       "avg_latency_us": 4621.513178104825,
00:27:59.793       "min_latency_us": 2265.2660869565216,
00:27:59.793       "max_latency_us": 12708.285217391305
00:27:59.793     }
00:27:59.793   ],
00:27:59.793   "core_count": 1
00:27:59.793 }
00:27:59.793 07:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:59.793 07:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:59.793 07:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:59.793 | .driver_specific
00:27:59.793 | .nvme_error
00:27:59.793 | .status_code
00:27:59.793 | .command_transient_transport_error'
00:27:59.793 07:37:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 884868
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 884868 ']'
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 884868
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884868
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884868'
00:28:00.052 killing process with pid 884868
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 884868
00:28:00.052 Received shutdown signal, test time was about 2.000000 seconds
00:28:00.052
00:28:00.052 Latency(us)
00:28:00.052 [2024-11-26T06:37:28.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:00.052 [2024-11-26T06:37:28.152Z] ===================================================================================================================
00:28:00.052 [2024-11-26T06:37:28.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:00.052 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 884868
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=885339
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 885339 /var/tmp/bperf.sock
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 885339 ']'
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:00.311 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:00.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:00.312 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:00.312 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:00.312 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:00.312 [2024-11-26 07:37:28.307607] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization...
00:28:00.312 [2024-11-26 07:37:28.307654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885339 ]
00:28:00.312 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:00.312 Zero copy mechanism will not be used.
00:28:00.312 [2024-11-26 07:37:28.370550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:00.312 [2024-11-26 07:37:28.414225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:00.571 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:00.571 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:00.571 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:00.571 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:00.830 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:00.830 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:00.830 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:00.830 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:00.830 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:00.830 07:37:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:01.089 nvme0n1
00:28:01.089 07:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:01.089 07:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.089 07:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:01.089 07:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
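The trace above is the setup half of the 131072/16 digest-error pass: bdevperf is started against /var/tmp/bperf.sock in wait-for-RPC mode (-z), per-command NVMe error statistics and unlimited retries are enabled, the controller is attached with --ddgst so the host verifies the data digest of each payload, and accel_error_inject_error is switched from disable to corrupt on the target side. A minimal stand-alone sketch of that sequence, using only the commands visible in this trace and assuming an nvmf target already listening on 10.0.0.2:4420 with its default RPC socket (the SPDK and SOCK variables below are illustrative shorthand, not part of digest.sh):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # start bdevperf on core 1 (-m 2) and let it wait for RPC configuration (-z)
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 131072 -t 2 -q 16 -z &
  # the test waits for $SOCK to appear (waitforlisten) before issuing these calls
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt crc32c operations on the target, sent to its default RPC socket as rpc_cmd does above
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the workload, then read how many completions ended as transient transport errors
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  $SPDK/scripts/rpc.py -s $SOCK bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected digest mismatch then shows up below as a data_crc32_calc_done error followed by a WRITE completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); with --bdev-retry-count -1 the bdev layer retries those writes, which is consistent with io_failed staying 0 in the summary above while the transient error counter (217 in the previous pass) keeps growing.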
00:28:01.089 07:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:01.089 07:37:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:01.089 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:01.089 Zero copy mechanism will not be used. 00:28:01.089 Running I/O for 2 seconds... 00:28:01.089 [2024-11-26 07:37:29.152958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.089 [2024-11-26 07:37:29.153054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.089 [2024-11-26 07:37:29.153090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.089 [2024-11-26 07:37:29.158743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.089 [2024-11-26 07:37:29.158891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.089 [2024-11-26 07:37:29.158919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.089 [2024-11-26 07:37:29.165065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.089 [2024-11-26 07:37:29.165222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.089 [2024-11-26 07:37:29.165245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.089 [2024-11-26 07:37:29.171570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.089 [2024-11-26 07:37:29.171723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.089 [2024-11-26 07:37:29.171742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.089 [2024-11-26 07:37:29.177943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.089 [2024-11-26 07:37:29.178111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.089 [2024-11-26 07:37:29.178132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.184268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.184438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.184458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.190666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.190831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.190850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.197001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.197155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.197174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.203310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.203453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.203473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.209874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.210037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.210057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.216337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.216480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.216499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.222912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.223058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.223078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.229170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.229328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.229348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.235570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.235750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.235769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.241661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.241849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.241868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.248911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.349 [2024-11-26 07:37:29.249071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.349 [2024-11-26 07:37:29.249091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.349 [2024-11-26 07:37:29.255400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.255491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.255511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.261185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.261277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.261297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.267988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.268044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.268063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.273176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.273233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.273252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.278569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.278628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.278647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.283983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.284061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.284081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.289041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.289113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.289133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.294110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.294195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.294214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.299249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.299336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.299355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.304876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.304971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.304991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.309994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.310068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.310091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.314966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.315041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.315061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.320295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.320355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.320374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.326224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.326357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.326377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.331833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.331887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.331906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.337217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.337275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.337294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.342231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.342300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.342319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.347399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.347492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.347511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.352380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.352442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.352461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.357985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.358042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.358069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.363061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.363125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.363144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.368229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.368283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.368302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.373236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.373321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.373340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.378702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.378832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.378851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.383928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.384011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 
07:37:29.384030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.388639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.388719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.388739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.393209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.393278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.350 [2024-11-26 07:37:29.393296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.350 [2024-11-26 07:37:29.397528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.350 [2024-11-26 07:37:29.397632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.397651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.402208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.402290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.402309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.406872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.406926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.406945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.411667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.411745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.411765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.416354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.416443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:01.351 [2024-11-26 07:37:29.416462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.421107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.421182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.421201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.425542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.425609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.425629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.429980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.430037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.430056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.434734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.434796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.434813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.351 [2024-11-26 07:37:29.439821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.351 [2024-11-26 07:37:29.439919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.351 [2024-11-26 07:37:29.439939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.445476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.445534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.445552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.450286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.450353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.450372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.455101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.455209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.455229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.459638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.459699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.459718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.464377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.464432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.464451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.469051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.469120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.469140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.473710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.473816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.473834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.478434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.478504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.478523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.483122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.483235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.483257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.487792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.487856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.487875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.611 [2024-11-26 07:37:29.492523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.611 [2024-11-26 07:37:29.492576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.611 [2024-11-26 07:37:29.492595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.497077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.497152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.497171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.501416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.501474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.501493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.506084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.506155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.506173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.510925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.511027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.511046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.516688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.516741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.516760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.522139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.522200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.522218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.526983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.527061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.527081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.531782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.531838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.531857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.536580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.536652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.536671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.541103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.541168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.541187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.545703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.545810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.545829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.550335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.550397] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.550416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.555109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.555179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.555198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.560360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.560438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.560457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.565539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.565612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.565632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.570744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.570817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.570836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.576131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.576214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.576233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.580931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.581010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.581028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.585719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.585773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.585792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.591021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.591082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.591101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.596089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.596171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.596190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.600905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.600972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.600991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.605626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.605715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.605734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.610836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.610934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.610964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.615964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.616059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.616078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.620888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 
07:37:29.620981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.621000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.625490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.612 [2024-11-26 07:37:29.625555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.612 [2024-11-26 07:37:29.625573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.612 [2024-11-26 07:37:29.629816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.629878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.629897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.634153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.634255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.634274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.638397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.638472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.638491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.642663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.642764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.642784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.646934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.647033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.647053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.651200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 
00:28:01.613 [2024-11-26 07:37:29.651283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.651303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.655428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.655492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.655511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.659680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.659737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.659757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.663976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.664043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.664063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.668220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.668329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.668349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.672579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.672650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.672669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.676779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.676845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.676865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.681022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with 
pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.681098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.681117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.685222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.685294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.685313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.689410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.689476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.689495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.693601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.693665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.693684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.697792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.697863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.697883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.613 [2024-11-26 07:37:29.702094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.613 [2024-11-26 07:37:29.702171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.613 [2024-11-26 07:37:29.702190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.873 [2024-11-26 07:37:29.706410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.873 [2024-11-26 07:37:29.706480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.873 [2024-11-26 07:37:29.706499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.873 [2024-11-26 07:37:29.710717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.873 [2024-11-26 07:37:29.710803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.873 [2024-11-26 07:37:29.710822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.873 [2024-11-26 07:37:29.714932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.873 [2024-11-26 07:37:29.715020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.873 [2024-11-26 07:37:29.715040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.873 [2024-11-26 07:37:29.719486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.873 [2024-11-26 07:37:29.719562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.873 [2024-11-26 07:37:29.719581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.873 [2024-11-26 07:37:29.724432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.873 [2024-11-26 07:37:29.724488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.873 [2024-11-26 07:37:29.724511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.873 [2024-11-26 07:37:29.728688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.873 [2024-11-26 07:37:29.728761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.873 [2024-11-26 07:37:29.728779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.873 [2024-11-26 07:37:29.732929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.733000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.733019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.737187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.737246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.737266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.741434] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.741498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.741517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.745621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.745696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.745715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.749855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.749917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.749937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.754066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.754127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.754146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.758314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.758382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.758401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.762557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.762629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.762648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.766740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.766798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.766818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.771003] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.771063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.771081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.775672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.775732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.775751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.780053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.780117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.780136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.784754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.784855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.784874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.789382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.789490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.789509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.794072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.794134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.794153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.798685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.798757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.798776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.874 
[2024-11-26 07:37:29.804262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.804424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.804444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.810832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.810969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.810989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.817322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.817387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.817406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.824033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.824171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.824190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.831320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.831470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.831489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.838533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.838598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.838617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.845570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.845732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.845752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:01.874 [2024-11-26 07:37:29.852550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.852697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.852716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.859377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.859523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.859547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.866431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.866709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.866731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.873338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.874 [2024-11-26 07:37:29.873643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.874 [2024-11-26 07:37:29.873664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.874 [2024-11-26 07:37:29.880053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.880342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.880363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.886709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.887034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.887055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.893127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.893354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.893376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.898297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.898517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.898537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.903106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.903325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.903345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.907688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.907917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.907937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.912354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.912572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.912591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.917080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.917298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.917319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.921517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.921745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.921767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.926569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.926788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.926808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.931480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.931713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.931734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.936491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.936702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.936723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.941337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.941557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.941577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.945772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.946008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.946028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.950213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.950442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.950463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.955131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.955350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.955371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.961009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.961270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.961290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:01.875 [2024-11-26 07:37:29.965799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:01.875 [2024-11-26 07:37:29.966027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.875 [2024-11-26 07:37:29.966048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.135 [2024-11-26 07:37:29.970216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.135 [2024-11-26 07:37:29.970444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.135 [2024-11-26 07:37:29.970464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.135 [2024-11-26 07:37:29.975002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.135 [2024-11-26 07:37:29.975223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.135 [2024-11-26 07:37:29.975242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.135 [2024-11-26 07:37:29.979193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.135 [2024-11-26 07:37:29.979414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.135 [2024-11-26 07:37:29.979434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.135 [2024-11-26 07:37:29.983232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.135 [2024-11-26 07:37:29.983456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:29.983477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:29.987271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:29.987486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:29.987507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:29.991327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:29.991541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:29.991566] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:29.995285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:29.995501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:29.995522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:29.999273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:29.999511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:29.999531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.003315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.003561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.003587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.007472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.007677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.007709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.011333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.011546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.011568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.015190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.015398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.015419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.019158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.019373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.019395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.023417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.023623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.023648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.028827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.028982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.029008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.033682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.033867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.033888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.038451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.038639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.038660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.043124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.043297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.043316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.047896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.048048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.048067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.052845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.053027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 
07:37:30.053047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.057427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.057643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.057670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.063003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.063177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.063199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.067459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.067640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.067661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.072774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.072966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.072987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.077290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.077470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.077489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.082046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.082261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.082282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.086978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.087164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:02.136 [2024-11-26 07:37:30.087183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.091777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.091944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.091970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.096673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.096861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.096881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.101347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.101529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.136 [2024-11-26 07:37:30.101549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.136 [2024-11-26 07:37:30.106016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.136 [2024-11-26 07:37:30.106148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.106168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.110835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.111022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.111041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.115317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.115489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.115510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.120174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.120373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.120392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.125044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.125210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.125231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.129142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.129315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.129335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.133060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.133252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.133273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.136910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.137107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.137127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.140833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.141035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.141054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.144880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.145078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.145099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.148777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.148977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.149004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.137 6203.00 IOPS, 775.38 MiB/s [2024-11-26T06:37:30.237Z] [2024-11-26 07:37:30.153777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.153981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.154002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.157620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.157807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.157828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.161535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.161731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.161752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.165397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.165596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.165617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.169263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.169453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.169474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.173138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.173325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.173345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.176999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.177184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.177203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.180954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.181149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.181168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.185184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.185362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.185381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.189965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.190162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.190182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.194932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.195117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.195136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.199970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.200147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.200168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.205071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.205280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.205301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.209786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 
07:37:30.209977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.209998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.214321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.214509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.214530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.219296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.137 [2024-11-26 07:37:30.219439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.137 [2024-11-26 07:37:30.219458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.137 [2024-11-26 07:37:30.224234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.138 [2024-11-26 07:37:30.224425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.138 [2024-11-26 07:37:30.224448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.397 [2024-11-26 07:37:30.229606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.229882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.229903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.236355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.236575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.236594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.242568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.242744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.242763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.247853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 
00:28:02.398 [2024-11-26 07:37:30.248031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.248050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.253401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.253665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.253686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.260255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.260491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.260513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.266872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.267063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.267082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.273551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.273805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.273825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.279895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.280105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.280131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.286653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.286981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.287002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.292970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with 
pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.293204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.293225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.300139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.300318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.300337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.306882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.307124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.307146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.313736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.313913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.313933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.319654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.319856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.319876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.324151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.324351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.324371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.328596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.328833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.328854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.333226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.333426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.333445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.337584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.337786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.337807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.341958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.342196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.342217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.346275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.346493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.346514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.350544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.350787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.350808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.354976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.355167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.355188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.358967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.359198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.359218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.363830] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.364048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.364066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.369344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.369524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.369542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.374445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.398 [2024-11-26 07:37:30.374746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.398 [2024-11-26 07:37:30.374766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.398 [2024-11-26 07:37:30.380106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.380330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.380350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.386371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.386596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.386617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.392646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.392777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.392795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.399092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.399341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.399362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.404831] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.405011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.405030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.409962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.410087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.410105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.414272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.414438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.414458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.419029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.419101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.419123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.423514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.423674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.423693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.427797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.427960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.427980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.432006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.432152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.432171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.399 
[2024-11-26 07:37:30.436904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.437238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.437258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.441620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.441773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.441791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.445728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.445899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.445918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.449669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.449816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.449835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.454425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.454584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.454604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.458769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.459048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.459068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.464481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.464707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.464727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:02.399 [2024-11-26 07:37:30.470190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.470360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.470378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.476162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.476299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.476318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.482041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.482170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.482189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.399 [2024-11-26 07:37:30.488231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.399 [2024-11-26 07:37:30.488347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.399 [2024-11-26 07:37:30.488365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.494677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.494896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.494915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.500439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.500627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.500646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.506547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.506709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.506728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.512538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.512706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.512726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.518322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.518465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.518484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.522683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.522845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.522863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.527265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.527397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.527415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.531798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.531946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.531971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.536820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.536972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.536991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.541434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.541584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.541602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.546124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.546276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.546295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.550881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.551016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.551039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.555540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.555692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.555710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.560174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.560276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.560296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.660 [2024-11-26 07:37:30.564790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.660 [2024-11-26 07:37:30.564933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.660 [2024-11-26 07:37:30.564956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.569724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.569843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.569862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.574116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.574269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.574288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.578163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.578330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.578350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.582107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.582286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.582306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.585914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.586084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.586102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.589972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.590132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.590151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.594033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.594205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.594224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.598026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.598192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.598212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.602140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.602311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.602329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.605929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.606102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.606120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.610117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.610286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.610306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.615161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.615223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.615242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.619415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.619557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.619576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.623596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.623751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.623770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.627652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.627816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.627834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.631645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.631806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.631825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.635642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.635799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.635817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.640001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.640158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.640176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.644740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.644846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.644865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.649141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.649296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.649316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.653239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.653401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.653421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.657227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.657376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.657395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.661667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.661830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 
07:37:30.661852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.665925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.666085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.666104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.670442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.670606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.670625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.674875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.675007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.675027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.661 [2024-11-26 07:37:30.680919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.661 [2024-11-26 07:37:30.681206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.661 [2024-11-26 07:37:30.681227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.686843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.687034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.687052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.693075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.693326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.693346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.699572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.699826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:02.662 [2024-11-26 07:37:30.699846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.706324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.706466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.706485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.713439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.713675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.713695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.720547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.720668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.720688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.727001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.727145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.727164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.731139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.731263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.731282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.735478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.735597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.735616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.739586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.739703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.739721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.743782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.743969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.743988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.748616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.748748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.748768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.662 [2024-11-26 07:37:30.752757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.662 [2024-11-26 07:37:30.752873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.662 [2024-11-26 07:37:30.752892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.756761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.756886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.756904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.760829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.760970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.760989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.764806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.764929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.764952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.768922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.769050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.769068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.773309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.773432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.773450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.778292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.778537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.778557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.782455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.782569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.782588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.786527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.786643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.786661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.790524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.790675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.790697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.794523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.794665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.794684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.798519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.798684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.798703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.802395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.802566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.802587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.806288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.806430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.806449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.810172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.810309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.810328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.814279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.814385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.814404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.818127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.818288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.818308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.821987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.822103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.822123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.825841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.825970] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.825989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.829702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.829826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.829845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.833584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.833733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.833752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.837432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.837577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.923 [2024-11-26 07:37:30.837595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.923 [2024-11-26 07:37:30.841287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.923 [2024-11-26 07:37:30.841424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.841444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.845171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.845307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.845326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.849046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.849190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.849209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.852920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.853055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.853074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.856769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.856909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.856928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.860614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.860754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.860773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.864454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.864585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.864604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.868306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.868441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.868460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.872199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.872315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.872333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.876101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.876234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.876253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.879963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 
07:37:30.880104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.880123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.883877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.883993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.884011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.887984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.888095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.888114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.892849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.892957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.892979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.897366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.897500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.897518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.901655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.901775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.901794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.905710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.905824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.905843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.909806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 
00:28:02.924 [2024-11-26 07:37:30.909935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.909958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.913901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.914056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.914075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.918003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.918125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.918143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.922099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.922219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.922237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.926257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.926405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.926424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.930504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.930626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.930648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.934634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.934760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.934779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.938825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with 
pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.938967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.938986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.943098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.943240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.943258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.947248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.947375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.947394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.924 [2024-11-26 07:37:30.951277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.924 [2024-11-26 07:37:30.951401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.924 [2024-11-26 07:37:30.951420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.955393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.955496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.955515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.959398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.959523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.959541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.963441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.963567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.963586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.967614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.967744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.967762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.972438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.972572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.972590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.977201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.977348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.977367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.981351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.981480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.981498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.985446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.985584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.985602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.989472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.989601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.989619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.993470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.993596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.993615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:30.997427] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:30.997576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:30.997594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:31.001336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:31.001474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:31.001492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:31.005184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:31.005343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:31.005363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:31.009026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:31.009166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:31.009185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.925 [2024-11-26 07:37:31.012960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:02.925 [2024-11-26 07:37:31.013100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.925 [2024-11-26 07:37:31.013119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.016978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.017098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.017117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.021082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.021209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.021228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.025781] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.025879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.025898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.029970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.030098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.030116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.034071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.034205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.034224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.038313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.038444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.038466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.042382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.042513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.042533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.046595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.046729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.046748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.051429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.051549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.051567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.185 
[2024-11-26 07:37:31.056684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.056816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.056835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.060848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.060986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.061004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.064970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.065102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.065121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.069078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.185 [2024-11-26 07:37:31.069224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.185 [2024-11-26 07:37:31.069242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.185 [2024-11-26 07:37:31.073081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.073207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.073226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.077145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.077273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.077292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.081209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.081366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.081386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.085203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.085340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.085359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.089341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.089462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.089480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.093485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.093633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.093651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.097562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.097706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.097724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.101594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.101737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.101755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.105653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.105774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.105792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.109760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.109890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.109909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.113671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.113791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.113810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.117757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.117872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.117891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.122657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.122762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.122781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.127031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.127172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.127191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.131133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.131289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.131308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.135263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.135402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.135420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.139189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.139313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.139332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.143151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.143270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.143289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.147146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.147248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.147270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.186 [2024-11-26 07:37:31.151107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba2b20) with pdu=0x2000166ff3c8 00:28:03.186 [2024-11-26 07:37:31.151246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.186 [2024-11-26 07:37:31.151265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.186 6481.00 IOPS, 810.12 MiB/s 00:28:03.186 Latency(us) 00:28:03.186 [2024-11-26T06:37:31.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.186 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:03.186 nvme0n1 : 2.00 6480.45 810.06 0.00 0.00 2464.98 1524.42 9232.03 00:28:03.186 [2024-11-26T06:37:31.286Z] =================================================================================================================== 00:28:03.186 [2024-11-26T06:37:31.286Z] Total : 6480.45 810.06 0.00 0.00 2464.98 1524.42 9232.03 00:28:03.186 { 00:28:03.186 "results": [ 00:28:03.186 { 00:28:03.186 "job": "nvme0n1", 00:28:03.186 "core_mask": "0x2", 00:28:03.186 "workload": "randwrite", 00:28:03.186 "status": "finished", 00:28:03.186 "queue_depth": 16, 00:28:03.186 "io_size": 131072, 00:28:03.186 "runtime": 2.003255, 00:28:03.186 "iops": 6480.453062640552, 00:28:03.186 "mibps": 810.056632830069, 00:28:03.186 "io_failed": 0, 00:28:03.186 "io_timeout": 0, 00:28:03.186 "avg_latency_us": 2464.980827768214, 00:28:03.186 "min_latency_us": 1524.424347826087, 00:28:03.186 "max_latency_us": 9232.027826086956 00:28:03.186 } 00:28:03.186 ], 00:28:03.186 "core_count": 1 00:28:03.186 } 00:28:03.186 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:03.186 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:03.186 | .driver_specific 00:28:03.186 | .nvme_error 00:28:03.186 | .status_code 00:28:03.186 | .command_transient_transport_error' 00:28:03.186 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:03.186 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat 
-b nvme0n1 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 419 > 0 )) 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 885339 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 885339 ']' 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 885339 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885339 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885339' 00:28:03.446 killing process with pid 885339 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 885339 00:28:03.446 Received shutdown signal, test time was about 2.000000 seconds 00:28:03.446 00:28:03.446 Latency(us) 00:28:03.446 [2024-11-26T06:37:31.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.446 [2024-11-26T06:37:31.546Z] =================================================================================================================== 00:28:03.446 [2024-11-26T06:37:31.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:03.446 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 885339 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 883679 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 883679 ']' 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 883679 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 883679 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 883679' 00:28:03.705 killing process with pid 883679 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 883679 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 883679 00:28:03.705 00:28:03.705 real 0m13.769s 00:28:03.705 
user 0m26.328s 00:28:03.705 sys 0m4.440s 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.705 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:03.705 ************************************ 00:28:03.705 END TEST nvmf_digest_error 00:28:03.705 ************************************ 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.965 rmmod nvme_tcp 00:28:03.965 rmmod nvme_fabrics 00:28:03.965 rmmod nvme_keyring 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 883679 ']' 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 883679 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 883679 ']' 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 883679 00:28:03.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (883679) - No such process 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 883679 is not found' 00:28:03.965 Process with pid 883679 is not found 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.965 07:37:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
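(Editor's aside, not part of the captured console output: the pass/fail decision for the digest-error run above comes from the get_transient_errcount call traced earlier, which reads the NVMe error counters reported by bdev_get_iostat over the bperf RPC socket and asserts that the transient transport error count — 419 in this run — is greater than zero. Below is a minimal standalone sketch of that same check, reusing the rpc.py path, socket, and jq filter that appear verbatim in the trace; treat it as an illustration of the verification step, not as the test script itself.)

#!/usr/bin/env bash
# Sketch of the digest-error verification step seen in the trace above.
# Assumes an SPDK bdevperf instance is still serving RPCs on the bperf socket.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# bdev_get_iostat exposes per-bdev NVMe error counters under driver_specific;
# the digest-error test only passes when transient transport errors were seen.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

if (( errcount > 0 )); then
  echo "data digest errors surfaced as transient transport errors: $errcount"
else
  echo "no transient transport errors recorded" >&2
  exit 1
fi

(End of editorial aside; the captured log continues below.)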
00:28:05.872 07:37:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:05.872 00:28:05.872 real 0m35.543s 00:28:05.872 user 0m53.986s 00:28:05.872 sys 0m13.394s 00:28:05.872 07:37:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.872 07:37:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.872 ************************************ 00:28:05.872 END TEST nvmf_digest 00:28:05.872 ************************************ 00:28:05.872 07:37:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:05.872 07:37:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:05.872 07:37:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:05.872 07:37:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:05.872 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:05.872 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.132 07:37:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.132 ************************************ 00:28:06.132 START TEST nvmf_bdevperf 00:28:06.132 ************************************ 00:28:06.132 07:37:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:06.133 * Looking for test storage... 00:28:06.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.133 
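The run_test call visible above (nvmf_host.sh@47 dispatching host/bdevperf.sh, with the argc check and xtrace_disable at common.sh@1105..1129) is what prints the starred START/END banners and the real/user/sys timing around each sub-suite. A rough, simplified sketch of that wrapper shape, assuming plain time and ignoring the xtrace and exit-code bookkeeping the real helper adds:

# Simplified sketch of the run_test banner + timing skeleton from the trace.
run_test_sketch() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"        # e.g. .../test/nvmf/host/bdevperf.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}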
07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:06.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.133 --rc genhtml_branch_coverage=1 00:28:06.133 --rc genhtml_function_coverage=1 00:28:06.133 --rc genhtml_legend=1 00:28:06.133 --rc geninfo_all_blocks=1 00:28:06.133 --rc geninfo_unexecuted_blocks=1 00:28:06.133 00:28:06.133 ' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:06.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.133 --rc genhtml_branch_coverage=1 00:28:06.133 --rc genhtml_function_coverage=1 00:28:06.133 --rc genhtml_legend=1 00:28:06.133 --rc geninfo_all_blocks=1 00:28:06.133 --rc geninfo_unexecuted_blocks=1 00:28:06.133 00:28:06.133 ' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:06.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.133 --rc genhtml_branch_coverage=1 00:28:06.133 --rc genhtml_function_coverage=1 00:28:06.133 --rc genhtml_legend=1 00:28:06.133 --rc geninfo_all_blocks=1 00:28:06.133 --rc geninfo_unexecuted_blocks=1 00:28:06.133 00:28:06.133 ' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:06.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.133 --rc genhtml_branch_coverage=1 00:28:06.133 --rc genhtml_function_coverage=1 00:28:06.133 --rc genhtml_legend=1 00:28:06.133 --rc geninfo_all_blocks=1 00:28:06.133 --rc geninfo_unexecuted_blocks=1 00:28:06.133 00:28:06.133 ' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 
-- # [[ Linux == FreeBSD ]] 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:06.133 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:06.134 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:06.134 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.134 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.134 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.134 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:06.134 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:06.134 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.134 07:37:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.403 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:11.404 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:11.404 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
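The block above builds ID tables for Intel E810 (0x1592/0x159b), X722 and Mellanox parts, then for each matching PCI address globs /sys/bus/pci/devices/$pci/net/ to learn the kernel interface names (cvl_0_0 and cvl_0_1 here). A standalone sysfs walk that performs roughly the same discovery; the helper name and the hard-coded two-entry ID list are illustrative only:

#!/usr/bin/env bash
# Roughly the discovery the trace performs: find Intel E810 NICs by PCI
# vendor/device ID and print the net devices sitting under each of them.
find_e810_netdevs() {
    local vendor=0x8086
    local devids=(0x1592 0x159b)       # the E810 IDs checked in the trace
    local pci dev netdev
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = "$vendor" ] || continue
        dev=$(cat "$pci/device")
        case " ${devids[*]} " in
            *" $dev "*) ;;             # device ID is in the list
            *) continue ;;
        esac
        echo "Found ${pci##*/} ($vendor - $dev)"
        for netdev in "$pci"/net/*; do
            [ -e "$netdev" ] && echo "  net device: ${netdev##*/}"
        done
    done
}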
00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:11.404 Found net devices under 0000:86:00.0: cvl_0_0 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:11.404 Found net devices under 0000:86:00.1: cvl_0_1 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:28:11.404 00:28:11.404 --- 10.0.0.2 ping statistics --- 00:28:11.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.404 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:11.404 00:28:11.404 --- 10.0.0.1 ping statistics --- 00:28:11.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.404 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=889341 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 889341 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 889341 ']' 00:28:11.404 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.405 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.405 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.405 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:11.405 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.405 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.405 [2024-11-26 07:37:39.329203] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:28:11.405 [2024-11-26 07:37:39.329253] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.405 [2024-11-26 07:37:39.395756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:11.405 [2024-11-26 07:37:39.438438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.405 [2024-11-26 07:37:39.438475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.405 [2024-11-26 07:37:39.438482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.405 [2024-11-26 07:37:39.438489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.405 [2024-11-26 07:37:39.438494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.405 [2024-11-26 07:37:39.439944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.405 [2024-11-26 07:37:39.440022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.405 [2024-11-26 07:37:39.440024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.664 [2024-11-26 07:37:39.576887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.664 Malloc0 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
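Stripped of the rpc_cmd/xtrace wrapping, the target bring-up traced here and just below is a short RPC sequence: create the TCP transport, make a 64 MiB malloc bdev, create subsystem cnode1, attach the namespace, and listen on 10.0.0.2:4420. The same sequence sketched as direct scripts/rpc.py calls, with the flags copied from the trace and the default /var/tmp/spdk.sock assumed:

# Sketch: the bring-up from the trace, issued via scripts/rpc.py.
# rpc_cmd in the test framework forwards these arguments the same way.
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420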
00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.664 [2024-11-26 07:37:39.633295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.664 { 00:28:11.664 "params": { 00:28:11.664 "name": "Nvme$subsystem", 00:28:11.664 "trtype": "$TEST_TRANSPORT", 00:28:11.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.664 "adrfam": "ipv4", 00:28:11.664 "trsvcid": "$NVMF_PORT", 00:28:11.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.664 "hdgst": ${hdgst:-false}, 00:28:11.664 "ddgst": ${ddgst:-false} 00:28:11.664 }, 00:28:11.664 "method": "bdev_nvme_attach_controller" 00:28:11.664 } 00:28:11.664 EOF 00:28:11.664 )") 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:11.664 07:37:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:11.664 "params": { 00:28:11.664 "name": "Nvme1", 00:28:11.664 "trtype": "tcp", 00:28:11.664 "traddr": "10.0.0.2", 00:28:11.664 "adrfam": "ipv4", 00:28:11.664 "trsvcid": "4420", 00:28:11.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.664 "hdgst": false, 00:28:11.664 "ddgst": false 00:28:11.664 }, 00:28:11.664 "method": "bdev_nvme_attach_controller" 00:28:11.664 }' 00:28:11.664 [2024-11-26 07:37:39.687143] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:28:11.664 [2024-11-26 07:37:39.687186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889368 ] 00:28:11.664 [2024-11-26 07:37:39.749564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.924 [2024-11-26 07:37:39.791577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.183 Running I/O for 1 seconds... 00:28:13.121 10836.00 IOPS, 42.33 MiB/s 00:28:13.121 Latency(us) 00:28:13.121 [2024-11-26T06:37:41.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.121 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:13.121 Verification LBA range: start 0x0 length 0x4000 00:28:13.121 Nvme1n1 : 1.05 10476.39 40.92 0.00 0.00 11696.17 2493.22 41943.04 00:28:13.121 [2024-11-26T06:37:41.221Z] =================================================================================================================== 00:28:13.121 [2024-11-26T06:37:41.221Z] Total : 10476.39 40.92 0.00 0.00 11696.17 2493.22 41943.04 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=889610 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:13.380 { 00:28:13.380 "params": { 00:28:13.380 "name": "Nvme$subsystem", 00:28:13.380 "trtype": "$TEST_TRANSPORT", 00:28:13.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.380 "adrfam": "ipv4", 00:28:13.380 "trsvcid": "$NVMF_PORT", 00:28:13.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.380 "hdgst": ${hdgst:-false}, 00:28:13.380 "ddgst": ${ddgst:-false} 00:28:13.380 }, 00:28:13.380 "method": "bdev_nvme_attach_controller" 00:28:13.380 } 00:28:13.380 EOF 00:28:13.380 )") 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
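Both bdevperf runs in this section get their NVMe-oF attach parameters the same way: gen_nvmf_target_json prints a config on the fly and the binary reads it through --json /dev/fd/62 (or 63). A hand-written equivalent using a here-doc and process substitution; the params are copied from the trace, while the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON config layout and is assumed here, since the trace prints only the inner entry:

# Sketch: feed bdevperf the traced attach parameters without the helper.
./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)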
00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:13.380 07:37:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:13.380 "params": { 00:28:13.380 "name": "Nvme1", 00:28:13.380 "trtype": "tcp", 00:28:13.380 "traddr": "10.0.0.2", 00:28:13.380 "adrfam": "ipv4", 00:28:13.380 "trsvcid": "4420", 00:28:13.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:13.380 "hdgst": false, 00:28:13.380 "ddgst": false 00:28:13.380 }, 00:28:13.380 "method": "bdev_nvme_attach_controller" 00:28:13.380 }' 00:28:13.380 [2024-11-26 07:37:41.361371] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:28:13.380 [2024-11-26 07:37:41.361422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889610 ] 00:28:13.380 [2024-11-26 07:37:41.425952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.380 [2024-11-26 07:37:41.464526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.948 Running I/O for 15 seconds... 00:28:15.821 11003.00 IOPS, 42.98 MiB/s [2024-11-26T06:37:44.491Z] 11025.50 IOPS, 43.07 MiB/s [2024-11-26T06:37:44.491Z] 07:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 889341 00:28:16.391 07:37:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:16.391 [2024-11-26 07:37:44.331686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 
07:37:44.331824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.331986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.391 [2024-11-26 07:37:44.331994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.332006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.391 [2024-11-26 07:37:44.332014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.332026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.391 [2024-11-26 07:37:44.332035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.332044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.391 [2024-11-26 07:37:44.332051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.391 [2024-11-26 07:37:44.332060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.391 [2024-11-26 07:37:44.332068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.392 [2024-11-26 07:37:44.332384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 
07:37:44.332479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.392 [2024-11-26 07:37:44.332677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.392 [2024-11-26 07:37:44.332683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:106 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.332935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92272 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.332942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:16.393 [2024-11-26 07:37:44.333214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333370] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.393 [2024-11-26 07:37:44.333409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.393 [2024-11-26 07:37:44.333415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.394 [2024-11-26 07:37:44.333477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.394 [2024-11-26 07:37:44.333821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.333828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa9970 is same with the state(6) to be set 00:28:16.394 [2024-11-26 07:37:44.333838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:16.394 [2024-11-26 07:37:44.333843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:16.394 [2024-11-26 07:37:44.333849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92672 len:8 PRP1 0x0 PRP2 0x0 00:28:16.394 [2024-11-26 07:37:44.333857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.394 [2024-11-26 07:37:44.336822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.394 [2024-11-26 07:37:44.336875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.394 [2024-11-26 07:37:44.337434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.394 [2024-11-26 07:37:44.337479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.394 [2024-11-26 07:37:44.337504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.394 [2024-11-26 07:37:44.337975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.394 [2024-11-26 07:37:44.338153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.394 [2024-11-26 07:37:44.338163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.394 [2024-11-26 07:37:44.338171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.394 [2024-11-26 07:37:44.338179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.394 [2024-11-26 07:37:44.350194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.394 [2024-11-26 07:37:44.350511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.394 [2024-11-26 07:37:44.350558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.394 [2024-11-26 07:37:44.350583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.394 [2024-11-26 07:37:44.351174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.394 [2024-11-26 07:37:44.351757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.394 [2024-11-26 07:37:44.351769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.394 [2024-11-26 07:37:44.351778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:16.394 [2024-11-26 07:37:44.351786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.394 [2024-11-26 07:37:44.363275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.394 [2024-11-26 07:37:44.363581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.394 [2024-11-26 07:37:44.363601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.394 [2024-11-26 07:37:44.363611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.363793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.363978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.363989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.395 [2024-11-26 07:37:44.363997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.395 [2024-11-26 07:37:44.364004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.395 [2024-11-26 07:37:44.376379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.395 [2024-11-26 07:37:44.376748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.395 [2024-11-26 07:37:44.376767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.395 [2024-11-26 07:37:44.376775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.376960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.377139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.377149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.395 [2024-11-26 07:37:44.377156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.395 [2024-11-26 07:37:44.377164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
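Every completion in the dump above carries the same status, ABORTED - SQ DELETION (00/08). The pair in parentheses is the NVMe Status Code Type / Status Code: 00 is Generic Command Status and 08 is Command Aborted due to SQ Deletion, meaning the queued READ/WRITE commands were discarded because their submission queue was deleted while the controller reset below was in flight. The trailing p/m/dnr fields are the completion's phase tag, more bit, and do-not-retry bit. A minimal, self-contained C sketch (not SPDK code; status names follow the NVMe base specification) of how that pair decodes:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the "(sct/sc)" pair printed for each aborted completion above.
     * Only the generic status codes relevant to this log are spelled out. */
    static const char *nvme_status_str(uint8_t sct, uint8_t sc)
    {
            if (sct != 0x00) {              /* 0x00 = Generic Command Status */
                    return "non-generic status code type";
            }
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x07: return "COMMAND ABORT REQUESTED";
            case 0x08: return "ABORTED - SQ DELETION";
            default:   return "other generic status";
            }
    }

    int main(void)
    {
            /* The log prints "(00/08)" for every queued I/O. */
            printf("(00/08) -> %s\n", nvme_status_str(0x00, 0x08));
            return 0;
    }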
00:28:16.395 [2024-11-26 07:37:44.389539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.395 [2024-11-26 07:37:44.389968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.395 [2024-11-26 07:37:44.389987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.395 [2024-11-26 07:37:44.389996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.390179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.390363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.390374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.395 [2024-11-26 07:37:44.390381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.395 [2024-11-26 07:37:44.390388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.395 [2024-11-26 07:37:44.402667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.395 [2024-11-26 07:37:44.402973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.395 [2024-11-26 07:37:44.402992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.395 [2024-11-26 07:37:44.403000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.403177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.403356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.403370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.395 [2024-11-26 07:37:44.403377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.395 [2024-11-26 07:37:44.403384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.395 [2024-11-26 07:37:44.415730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.395 [2024-11-26 07:37:44.416172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.395 [2024-11-26 07:37:44.416192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.395 [2024-11-26 07:37:44.416201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.416378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.416557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.416567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.395 [2024-11-26 07:37:44.416575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.395 [2024-11-26 07:37:44.416582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.395 [2024-11-26 07:37:44.429014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.395 [2024-11-26 07:37:44.429458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.395 [2024-11-26 07:37:44.429477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.395 [2024-11-26 07:37:44.429485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.429668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.429852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.429861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.395 [2024-11-26 07:37:44.429869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.395 [2024-11-26 07:37:44.429877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.395 [2024-11-26 07:37:44.442466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.395 [2024-11-26 07:37:44.442860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.395 [2024-11-26 07:37:44.442879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.395 [2024-11-26 07:37:44.442888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.443090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.443285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.443295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.395 [2024-11-26 07:37:44.443303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.395 [2024-11-26 07:37:44.443310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.395 [2024-11-26 07:37:44.455659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.395 [2024-11-26 07:37:44.456128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.395 [2024-11-26 07:37:44.456148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.395 [2024-11-26 07:37:44.456158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.456354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.456550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.456561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.395 [2024-11-26 07:37:44.456569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.395 [2024-11-26 07:37:44.456576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
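Each reset attempt in this stretch fails the same way: posix_sock_create reports "connect() failed, errno = 111" before the qpair is established. On Linux errno 111 is ECONNREFUSED, the error a TCP connect() returns when nothing is listening on the target side; here the subsystem listener at 10.0.0.2:4420 is gone while the test tears the target down, so every reconnect is refused immediately. A small standalone POSIX sketch (illustrative only, not SPDK's socket layer; the address and port are simply the values from the log) that produces the same errno:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            struct sockaddr_in addr = { 0 };
            int fd = socket(AF_INET, SOCK_STREAM, 0);

            addr.sin_family = AF_INET;
            addr.sin_port = htons(4420);   /* NVMe-oF TCP port from the log */
            inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
                    /* With no listener on 10.0.0.2:4420 this prints errno = 111,
                     * i.e. ECONNREFUSED -- the same failure the log repeats. */
                    printf("connect() failed, errno = %d (%s)\n",
                           errno, strerror(errno));
            }
            close(fd);
            return 0;
    }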
00:28:16.395 [2024-11-26 07:37:44.468918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.395 [2024-11-26 07:37:44.469299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.395 [2024-11-26 07:37:44.469318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.395 [2024-11-26 07:37:44.469326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.395 [2024-11-26 07:37:44.469509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.395 [2024-11-26 07:37:44.469694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.395 [2024-11-26 07:37:44.469705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.396 [2024-11-26 07:37:44.469713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.396 [2024-11-26 07:37:44.469719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.396 [2024-11-26 07:37:44.482361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.656 [2024-11-26 07:37:44.482810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.656 [2024-11-26 07:37:44.482828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.656 [2024-11-26 07:37:44.482836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.656 [2024-11-26 07:37:44.483024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.656 [2024-11-26 07:37:44.483209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.656 [2024-11-26 07:37:44.483217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.656 [2024-11-26 07:37:44.483225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.656 [2024-11-26 07:37:44.483231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.656 [2024-11-26 07:37:44.495737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.656 [2024-11-26 07:37:44.496060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.656 [2024-11-26 07:37:44.496114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.656 [2024-11-26 07:37:44.496138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.656 [2024-11-26 07:37:44.496716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.656 [2024-11-26 07:37:44.497228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.656 [2024-11-26 07:37:44.497239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.656 [2024-11-26 07:37:44.497246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.656 [2024-11-26 07:37:44.497252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.656 [2024-11-26 07:37:44.508936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.656 [2024-11-26 07:37:44.509392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.656 [2024-11-26 07:37:44.509438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.656 [2024-11-26 07:37:44.509461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.656 [2024-11-26 07:37:44.510053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.656 [2024-11-26 07:37:44.510411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.656 [2024-11-26 07:37:44.510421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.656 [2024-11-26 07:37:44.510428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.656 [2024-11-26 07:37:44.510435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.656 [2024-11-26 07:37:44.522587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.656 [2024-11-26 07:37:44.522931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.656 [2024-11-26 07:37:44.522955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.656 [2024-11-26 07:37:44.522963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.656 [2024-11-26 07:37:44.523131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.656 [2024-11-26 07:37:44.523298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.656 [2024-11-26 07:37:44.523308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.656 [2024-11-26 07:37:44.523314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.656 [2024-11-26 07:37:44.523322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.656 [2024-11-26 07:37:44.535625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.656 [2024-11-26 07:37:44.536005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.656 [2024-11-26 07:37:44.536023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.656 [2024-11-26 07:37:44.536030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.656 [2024-11-26 07:37:44.536196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.656 [2024-11-26 07:37:44.536359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.656 [2024-11-26 07:37:44.536369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.656 [2024-11-26 07:37:44.536376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.656 [2024-11-26 07:37:44.536382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.656 [2024-11-26 07:37:44.548588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.656 [2024-11-26 07:37:44.548913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.656 [2024-11-26 07:37:44.548930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.548937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.549104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.549268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.549278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.549284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.549291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.657 [2024-11-26 07:37:44.561623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.562005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.562023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.562031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.562194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.562357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.562367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.562373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.562380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.657 [2024-11-26 07:37:44.574459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.574792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.574809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.574817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.575001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.575174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.575187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.575194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.575201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.657 [2024-11-26 07:37:44.587418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.587806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.587850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.587874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.588378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.588552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.588563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.588570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.588576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
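The sequence repeated roughly every 13 ms above is SPDK's asynchronous controller reset: disconnect, start a reconnect, poll it with spdk_nvme_ctrlr_reconnect_poll_async(), and report "controller reinitialization failed" / "Resetting controller failed." when the poll returns an error instead of completing. A hedged sketch of how an application might drive that same public API (function names assumed from include/spdk/nvme.h; return-value details, especially the -EAGAIN in-progress convention, should be verified against the SPDK version in use, and a real application would poll from its event loop rather than busy-wait):

    #include <errno.h>
    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* Sketch: drive one async reset/reconnect cycle for an NVMe-oF controller.
     * Assumed semantics: reconnect_poll_async() returns -EAGAIN while the
     * reconnect is still in progress, 0 on success, and another negative
     * errno on failure (the "controller reinitialization failed" case). */
    static bool try_reset(struct spdk_nvme_ctrlr *ctrlr)
    {
            int rc;

            if (spdk_nvme_ctrlr_disconnect(ctrlr) != 0) {
                    return false;        /* a reset/disconnect is already running */
            }
            spdk_nvme_ctrlr_reconnect_async(ctrlr);

            do {
                    rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
            } while (rc == -EAGAIN);     /* still connecting; keep polling */

            return rc == 0;              /* nonzero: reset failed, retry later */
    }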
00:28:16.657 [2024-11-26 07:37:44.600605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.601018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.601037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.601045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.601222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.601401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.601411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.601420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.601428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.657 [2024-11-26 07:37:44.613575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.613999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.614018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.614027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.614213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.614386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.614397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.614405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.614411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.657 [2024-11-26 07:37:44.626425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.626831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.626848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.626857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.627032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.627216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.627226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.627232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.627239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.657 [2024-11-26 07:37:44.639393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.639801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.639846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.639869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.640295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.640470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.640480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.640487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.640493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.657 [2024-11-26 07:37:44.652326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.652767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.652806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.652832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.653368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.653541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.653551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.653557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.653564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.657 [2024-11-26 07:37:44.665417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.665763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.665783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.657 [2024-11-26 07:37:44.665791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.657 [2024-11-26 07:37:44.665958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.657 [2024-11-26 07:37:44.666147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.657 [2024-11-26 07:37:44.666156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.657 [2024-11-26 07:37:44.666163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.657 [2024-11-26 07:37:44.666170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.657 [2024-11-26 07:37:44.678264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.657 [2024-11-26 07:37:44.678686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.657 [2024-11-26 07:37:44.678703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.658 [2024-11-26 07:37:44.678711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.658 [2024-11-26 07:37:44.678874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.658 [2024-11-26 07:37:44.679063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.658 [2024-11-26 07:37:44.679073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.658 [2024-11-26 07:37:44.679080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.658 [2024-11-26 07:37:44.679086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.658 [2024-11-26 07:37:44.691193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.658 [2024-11-26 07:37:44.691616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.658 [2024-11-26 07:37:44.691660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.658 [2024-11-26 07:37:44.691683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.658 [2024-11-26 07:37:44.692276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.658 [2024-11-26 07:37:44.692532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.658 [2024-11-26 07:37:44.692545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.658 [2024-11-26 07:37:44.692555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.658 [2024-11-26 07:37:44.692565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.658 [2024-11-26 07:37:44.704988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.658 [2024-11-26 07:37:44.705414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.658 [2024-11-26 07:37:44.705431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.658 [2024-11-26 07:37:44.705439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.658 [2024-11-26 07:37:44.705609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.658 [2024-11-26 07:37:44.705776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.658 [2024-11-26 07:37:44.705786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.658 [2024-11-26 07:37:44.705792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.658 [2024-11-26 07:37:44.705798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.658 [2024-11-26 07:37:44.717908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.658 [2024-11-26 07:37:44.718343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.658 [2024-11-26 07:37:44.718388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.658 [2024-11-26 07:37:44.718411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.658 [2024-11-26 07:37:44.718844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.658 [2024-11-26 07:37:44.719032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.658 [2024-11-26 07:37:44.719042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.658 [2024-11-26 07:37:44.719049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.658 [2024-11-26 07:37:44.719056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.658 [2024-11-26 07:37:44.730690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.658 [2024-11-26 07:37:44.730963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.658 [2024-11-26 07:37:44.730980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.658 [2024-11-26 07:37:44.730989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.658 [2024-11-26 07:37:44.731153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.658 [2024-11-26 07:37:44.731315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.658 [2024-11-26 07:37:44.731325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.658 [2024-11-26 07:37:44.731332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.658 [2024-11-26 07:37:44.731338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.658 [2024-11-26 07:37:44.743620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.658 [2024-11-26 07:37:44.743997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.658 [2024-11-26 07:37:44.744030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.658 [2024-11-26 07:37:44.744039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.658 [2024-11-26 07:37:44.744211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.658 [2024-11-26 07:37:44.744383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.658 [2024-11-26 07:37:44.744393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.658 [2024-11-26 07:37:44.744403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.658 [2024-11-26 07:37:44.744410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.921 [2024-11-26 07:37:44.756822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.921 [2024-11-26 07:37:44.757264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.921 [2024-11-26 07:37:44.757282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.921 [2024-11-26 07:37:44.757291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.921 [2024-11-26 07:37:44.757468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.921 [2024-11-26 07:37:44.757646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.921 [2024-11-26 07:37:44.757655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.921 [2024-11-26 07:37:44.757662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.921 [2024-11-26 07:37:44.757669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.921 [2024-11-26 07:37:44.769794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.921 [2024-11-26 07:37:44.770221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.921 [2024-11-26 07:37:44.770267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.921 [2024-11-26 07:37:44.770291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.921 [2024-11-26 07:37:44.770781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.921 [2024-11-26 07:37:44.770959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.921 [2024-11-26 07:37:44.770969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.921 [2024-11-26 07:37:44.770976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.921 [2024-11-26 07:37:44.770983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.921 [2024-11-26 07:37:44.782654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.921 [2024-11-26 07:37:44.783057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.921 [2024-11-26 07:37:44.783075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.921 [2024-11-26 07:37:44.783082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.921 [2024-11-26 07:37:44.783245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.921 [2024-11-26 07:37:44.783408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.921 [2024-11-26 07:37:44.783418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.921 [2024-11-26 07:37:44.783424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.921 [2024-11-26 07:37:44.783431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.921 [2024-11-26 07:37:44.795556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.921 [2024-11-26 07:37:44.795984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.921 [2024-11-26 07:37:44.796031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.921 [2024-11-26 07:37:44.796054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.921 [2024-11-26 07:37:44.796631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.921 [2024-11-26 07:37:44.797086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.921 [2024-11-26 07:37:44.797097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.921 [2024-11-26 07:37:44.797104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.921 [2024-11-26 07:37:44.797111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.921 9288.67 IOPS, 36.28 MiB/s [2024-11-26T06:37:45.021Z] [2024-11-26 07:37:44.808436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.921 [2024-11-26 07:37:44.808786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.921 [2024-11-26 07:37:44.808804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.921 [2024-11-26 07:37:44.808812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.921 [2024-11-26 07:37:44.808979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.921 [2024-11-26 07:37:44.809167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.921 [2024-11-26 07:37:44.809176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.921 [2024-11-26 07:37:44.809183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.921 [2024-11-26 07:37:44.809190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.921 [2024-11-26 07:37:44.821286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.921 [2024-11-26 07:37:44.821685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.921 [2024-11-26 07:37:44.821703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.921 [2024-11-26 07:37:44.821710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.921 [2024-11-26 07:37:44.821873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.921 [2024-11-26 07:37:44.822059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.921 [2024-11-26 07:37:44.822070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.921 [2024-11-26 07:37:44.822077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.921 [2024-11-26 07:37:44.822083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
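The entries above are one iteration of a cycle that repeats throughout this section: bdev_nvme starts a reset for the path tagged [nqn.2016-06.io.spdk:cnode1, 2], the connect() issued by posix_sock_create() toward 10.0.0.2:4420 fails with errno = 111, the subsequent flush of the qpair reports a bad file descriptor, and spdk_nvme_ctrlr_reconnect_poll_async() declares the reinitialization failed before the next attempt starts roughly 12-13 ms later. The interleaved "9288.67 IOPS, 36.28 MiB/s" entry is the periodic throughput sample printed by the I/O workload running alongside these reconnect attempts. On Linux, errno 111 is ECONNREFUSED: nothing was accepting connections on that NVMe/TCP port at that instant. The sketch below is a stand-alone illustration of that errno, not SPDK code; the address 127.0.0.1, the reuse of port 4420, and the helper name try_connect are placeholders chosen here, not values taken from the test.

```python
# Illustrative sketch only, not SPDK code: confirm that the "errno = 111"
# reported by posix_sock_create above is ECONNREFUSED on Linux.
import errno
import socket


def try_connect(addr: str, port: int, timeout: float = 1.0) -> int:
    """Return 0 if the TCP connect succeeds, else the errno it failed with."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return 0
    except OSError as exc:
        # ConnectionRefusedError carries errno.ECONNREFUSED (111 on Linux).
        return exc.errno if exc.errno is not None else -1


if __name__ == "__main__":
    # 127.0.0.1:4420 is a local stand-in for the 10.0.0.2:4420 target in the
    # log; with no NVMe/TCP listener bound there, the connect is refused.
    rc = try_connect("127.0.0.1", 4420)
    print(f"connect() -> errno {rc} ({errno.errorcode.get(rc, 'ok')})")
```

Run against a port with no listener, this prints errno 111 (ECONNREFUSED), matching the posix_sock_create errors recorded above.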
00:28:16.921 [2024-11-26 07:37:44.834184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.921 [2024-11-26 07:37:44.834620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.921 [2024-11-26 07:37:44.834673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.921 [2024-11-26 07:37:44.834697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.921 [2024-11-26 07:37:44.835291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.921 [2024-11-26 07:37:44.835875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.922 [2024-11-26 07:37:44.835901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.922 [2024-11-26 07:37:44.835923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.922 [2024-11-26 07:37:44.835942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.922 [2024-11-26 07:37:44.847002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.922 [2024-11-26 07:37:44.847439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.922 [2024-11-26 07:37:44.847483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.922 [2024-11-26 07:37:44.847507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.922 [2024-11-26 07:37:44.848057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.922 [2024-11-26 07:37:44.848232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.922 [2024-11-26 07:37:44.848242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.922 [2024-11-26 07:37:44.848248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.922 [2024-11-26 07:37:44.848255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.922 [2024-11-26 07:37:44.860172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.922 [2024-11-26 07:37:44.860602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.922 [2024-11-26 07:37:44.860620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.922 [2024-11-26 07:37:44.860628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.922 [2024-11-26 07:37:44.860801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.922 [2024-11-26 07:37:44.860979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.922 [2024-11-26 07:37:44.860989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.922 [2024-11-26 07:37:44.860996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.922 [2024-11-26 07:37:44.861003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.922 [2024-11-26 07:37:44.873034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.922 [2024-11-26 07:37:44.873462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.922 [2024-11-26 07:37:44.873480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.922 [2024-11-26 07:37:44.873488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.922 [2024-11-26 07:37:44.873663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.922 [2024-11-26 07:37:44.873838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.922 [2024-11-26 07:37:44.873847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.922 [2024-11-26 07:37:44.873854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.922 [2024-11-26 07:37:44.873862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.922 [2024-11-26 07:37:44.885815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.922 [2024-11-26 07:37:44.886198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.922 [2024-11-26 07:37:44.886215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.922 [2024-11-26 07:37:44.886223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.922 [2024-11-26 07:37:44.886386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.922 [2024-11-26 07:37:44.886550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.922 [2024-11-26 07:37:44.886559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.922 [2024-11-26 07:37:44.886565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.922 [2024-11-26 07:37:44.886571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.922 [2024-11-26 07:37:44.898743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.922 [2024-11-26 07:37:44.899184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.922 [2024-11-26 07:37:44.899232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.922 [2024-11-26 07:37:44.899257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.922 [2024-11-26 07:37:44.899818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.922 [2024-11-26 07:37:44.900004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.922 [2024-11-26 07:37:44.900014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.922 [2024-11-26 07:37:44.900021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.922 [2024-11-26 07:37:44.900029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.922 [2024-11-26 07:37:44.911637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.922 [2024-11-26 07:37:44.912005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.922 [2024-11-26 07:37:44.912023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.922 [2024-11-26 07:37:44.912032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.922 [2024-11-26 07:37:44.912211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.922 [2024-11-26 07:37:44.912374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.922 [2024-11-26 07:37:44.912388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.922 [2024-11-26 07:37:44.912394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.922 [2024-11-26 07:37:44.912401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.922 [2024-11-26 07:37:44.924556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.922 [2024-11-26 07:37:44.924980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.922 [2024-11-26 07:37:44.924997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.922 [2024-11-26 07:37:44.925006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.922 [2024-11-26 07:37:44.925168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.922 [2024-11-26 07:37:44.925332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.922 [2024-11-26 07:37:44.925341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.922 [2024-11-26 07:37:44.925348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.922 [2024-11-26 07:37:44.925355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.922 [2024-11-26 07:37:44.937430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.922 [2024-11-26 07:37:44.937835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.922 [2024-11-26 07:37:44.937852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.922 [2024-11-26 07:37:44.937860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.922 [2024-11-26 07:37:44.938067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.922 [2024-11-26 07:37:44.938242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.923 [2024-11-26 07:37:44.938252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.923 [2024-11-26 07:37:44.938260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.923 [2024-11-26 07:37:44.938266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.923 [2024-11-26 07:37:44.950343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.923 [2024-11-26 07:37:44.950758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.923 [2024-11-26 07:37:44.950775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.923 [2024-11-26 07:37:44.950782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.923 [2024-11-26 07:37:44.950945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.923 [2024-11-26 07:37:44.951140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.923 [2024-11-26 07:37:44.951149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.923 [2024-11-26 07:37:44.951156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.923 [2024-11-26 07:37:44.951163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.923 [2024-11-26 07:37:44.963235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.923 [2024-11-26 07:37:44.963680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.923 [2024-11-26 07:37:44.963698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.923 [2024-11-26 07:37:44.963706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.923 [2024-11-26 07:37:44.963869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.923 [2024-11-26 07:37:44.964058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.923 [2024-11-26 07:37:44.964068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.923 [2024-11-26 07:37:44.964075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.923 [2024-11-26 07:37:44.964082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.923 [2024-11-26 07:37:44.976040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.923 [2024-11-26 07:37:44.976433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.923 [2024-11-26 07:37:44.976478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.923 [2024-11-26 07:37:44.976502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.923 [2024-11-26 07:37:44.976958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.923 [2024-11-26 07:37:44.977149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.923 [2024-11-26 07:37:44.977158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.923 [2024-11-26 07:37:44.977165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.923 [2024-11-26 07:37:44.977173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:16.923 [2024-11-26 07:37:44.988955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.923 [2024-11-26 07:37:44.989362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.923 [2024-11-26 07:37:44.989408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.923 [2024-11-26 07:37:44.989431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.923 [2024-11-26 07:37:44.990022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.923 [2024-11-26 07:37:44.990478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.923 [2024-11-26 07:37:44.990488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.923 [2024-11-26 07:37:44.990494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.923 [2024-11-26 07:37:44.990501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:16.923 [2024-11-26 07:37:45.001880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:16.923 [2024-11-26 07:37:45.002315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.923 [2024-11-26 07:37:45.002368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:16.923 [2024-11-26 07:37:45.002393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:16.923 [2024-11-26 07:37:45.002985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:16.923 [2024-11-26 07:37:45.003482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:16.923 [2024-11-26 07:37:45.003492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:16.923 [2024-11-26 07:37:45.003499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:16.923 [2024-11-26 07:37:45.003506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.183 [2024-11-26 07:37:45.015049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.183 [2024-11-26 07:37:45.015489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.183 [2024-11-26 07:37:45.015539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.183 [2024-11-26 07:37:45.015564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.183 [2024-11-26 07:37:45.016155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.183 [2024-11-26 07:37:45.016443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.183 [2024-11-26 07:37:45.016456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.183 [2024-11-26 07:37:45.016466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.183 [2024-11-26 07:37:45.016476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.183 [2024-11-26 07:37:45.028438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.028795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.028812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.028819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.029009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.029182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.029193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.029200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.029207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.184 [2024-11-26 07:37:45.041283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.041628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.041644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.041651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.041816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.042002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.042012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.042019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.042026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.184 [2024-11-26 07:37:45.054120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.054560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.054603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.054626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.055217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.055637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.055646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.055653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.055659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.184 [2024-11-26 07:37:45.067105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.067523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.067540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.067548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.067710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.067873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.067883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.067889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.067896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.184 [2024-11-26 07:37:45.079950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.080344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.080361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.080369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.080531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.080693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.080706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.080712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.080719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.184 [2024-11-26 07:37:45.092765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.093187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.093232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.093257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.093833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.094432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.094460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.094482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.094501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.184 [2024-11-26 07:37:45.105572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.106023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.106067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.106091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.106364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.106538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.106548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.106556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.106563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.184 [2024-11-26 07:37:45.118774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.119228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.119246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.119255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.119428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.119623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.119633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.119640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.119647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.184 [2024-11-26 07:37:45.131718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.131999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.132017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.132024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.132187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.132350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.132359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.132365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.184 [2024-11-26 07:37:45.132372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.184 [2024-11-26 07:37:45.144587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.184 [2024-11-26 07:37:45.145000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.184 [2024-11-26 07:37:45.145018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.184 [2024-11-26 07:37:45.145026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.184 [2024-11-26 07:37:45.145189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.184 [2024-11-26 07:37:45.145352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.184 [2024-11-26 07:37:45.145362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.184 [2024-11-26 07:37:45.145369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.145375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.185 [2024-11-26 07:37:45.157430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.157861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.157903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.157927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.158520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.158855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.158864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.158871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.158878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.185 [2024-11-26 07:37:45.170267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.170688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.170752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.170776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.171368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.171920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.171929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.171936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.171942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.185 [2024-11-26 07:37:45.183064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.183479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.183496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.183504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.183667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.183830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.183840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.183846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.183852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.185 [2024-11-26 07:37:45.195971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.196310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.196327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.196335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.196497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.196660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.196670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.196676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.196683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.185 [2024-11-26 07:37:45.208787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.209210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.209259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.209283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.209791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.209960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.209969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.209992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.209999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.185 [2024-11-26 07:37:45.221575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.222016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.222060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.222103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.222640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.222804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.222812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.222818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.222824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.185 [2024-11-26 07:37:45.234415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.234839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.234856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.234864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.235053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.235227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.235237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.235243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.235250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.185 [2024-11-26 07:37:45.247270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.247643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.247660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.247667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.247830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.248017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.248031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.248038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.248045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.185 [2024-11-26 07:37:45.260104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.260500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.260518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.260525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.260688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.260850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.260860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.185 [2024-11-26 07:37:45.260866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.185 [2024-11-26 07:37:45.260872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.185 [2024-11-26 07:37:45.273115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.185 [2024-11-26 07:37:45.273491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.185 [2024-11-26 07:37:45.273508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.185 [2024-11-26 07:37:45.273515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.185 [2024-11-26 07:37:45.273701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.185 [2024-11-26 07:37:45.273874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.185 [2024-11-26 07:37:45.273885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.186 [2024-11-26 07:37:45.273891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.186 [2024-11-26 07:37:45.273898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.445 [2024-11-26 07:37:45.286111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.445 [2024-11-26 07:37:45.286532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.445 [2024-11-26 07:37:45.286548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.445 [2024-11-26 07:37:45.286556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.445 [2024-11-26 07:37:45.286719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.445 [2024-11-26 07:37:45.286882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.445 [2024-11-26 07:37:45.286891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.445 [2024-11-26 07:37:45.286897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.445 [2024-11-26 07:37:45.286904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.445 [2024-11-26 07:37:45.299019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.445 [2024-11-26 07:37:45.299433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.445 [2024-11-26 07:37:45.299449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.445 [2024-11-26 07:37:45.299457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.445 [2024-11-26 07:37:45.299619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.299782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.299791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.299797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.299804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.446 [2024-11-26 07:37:45.311920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.312332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.312349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.312356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.312519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.312682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.312691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.312698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.312704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.446 [2024-11-26 07:37:45.324811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.325252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.325297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.325320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.325904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.326093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.326108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.326115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.326121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.446 [2024-11-26 07:37:45.337664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.338093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.338145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.338168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.338745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.339293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.339303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.339310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.339317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.446 [2024-11-26 07:37:45.351212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.351635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.351652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.351659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.351825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.352016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.352026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.352032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.352039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.446 [2024-11-26 07:37:45.364087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.364428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.364444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.364452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.364615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.364778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.364789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.364796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.364803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.446 [2024-11-26 07:37:45.377191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.377630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.377674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.377701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.378303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.378494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.378504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.378511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.378518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.446 [2024-11-26 07:37:45.390215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.390676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.390693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.390701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.390865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.391053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.391064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.391070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.391078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.446 [2024-11-26 07:37:45.403194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.403485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.403501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.403509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.403672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.403835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.403844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.403851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.403857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.446 [2024-11-26 07:37:45.416116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.416532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.416577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.446 [2024-11-26 07:37:45.416601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.446 [2024-11-26 07:37:45.417113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.446 [2024-11-26 07:37:45.417278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.446 [2024-11-26 07:37:45.417293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.446 [2024-11-26 07:37:45.417299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.446 [2024-11-26 07:37:45.417307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.446 [2024-11-26 07:37:45.428933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.446 [2024-11-26 07:37:45.429333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.446 [2024-11-26 07:37:45.429350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.429358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.429520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.429683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.429692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.429699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.429705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.447 [2024-11-26 07:37:45.441800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.447 [2024-11-26 07:37:45.442164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.447 [2024-11-26 07:37:45.442208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.442231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.442808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.443404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.443431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.443459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.443466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.447 [2024-11-26 07:37:45.454759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.447 [2024-11-26 07:37:45.455196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.447 [2024-11-26 07:37:45.455240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.455264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.455703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.455868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.455877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.455884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.455891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.447 [2024-11-26 07:37:45.467721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.447 [2024-11-26 07:37:45.468143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.447 [2024-11-26 07:37:45.468198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.468222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.468744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.468907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.468915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.468922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.468928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.447 [2024-11-26 07:37:45.480697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.447 [2024-11-26 07:37:45.481070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.447 [2024-11-26 07:37:45.481088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.481097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.481273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.481437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.481447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.481454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.481460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.447 [2024-11-26 07:37:45.493551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.447 [2024-11-26 07:37:45.493908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.447 [2024-11-26 07:37:45.493925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.493933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.494110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.494291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.494300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.494307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.494313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.447 [2024-11-26 07:37:45.506478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.447 [2024-11-26 07:37:45.506822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.447 [2024-11-26 07:37:45.506844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.506852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.507039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.507212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.507222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.507230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.507237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.447 [2024-11-26 07:37:45.519322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.447 [2024-11-26 07:37:45.519684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.447 [2024-11-26 07:37:45.519729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.519753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.520342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.520810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.520819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.520826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.520832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.447 [2024-11-26 07:37:45.532296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.447 [2024-11-26 07:37:45.532713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.447 [2024-11-26 07:37:45.532755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.447 [2024-11-26 07:37:45.532780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.447 [2024-11-26 07:37:45.533371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.447 [2024-11-26 07:37:45.533579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.447 [2024-11-26 07:37:45.533589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.447 [2024-11-26 07:37:45.533595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.447 [2024-11-26 07:37:45.533602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.707 [2024-11-26 07:37:45.545436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.707 [2024-11-26 07:37:45.545875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.707 [2024-11-26 07:37:45.545920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.707 [2024-11-26 07:37:45.545943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.707 [2024-11-26 07:37:45.546471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.707 [2024-11-26 07:37:45.546645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.707 [2024-11-26 07:37:45.546655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.707 [2024-11-26 07:37:45.546662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.707 [2024-11-26 07:37:45.546669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.707 [2024-11-26 07:37:45.558323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.707 [2024-11-26 07:37:45.558754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.707 [2024-11-26 07:37:45.558805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.707 [2024-11-26 07:37:45.558829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.707 [2024-11-26 07:37:45.559405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.707 [2024-11-26 07:37:45.559570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.707 [2024-11-26 07:37:45.559580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.707 [2024-11-26 07:37:45.559586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.707 [2024-11-26 07:37:45.559592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.707 [2024-11-26 07:37:45.571306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.707 [2024-11-26 07:37:45.571751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.707 [2024-11-26 07:37:45.571797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.707 [2024-11-26 07:37:45.571821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.707 [2024-11-26 07:37:45.572414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.707 [2024-11-26 07:37:45.572777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.707 [2024-11-26 07:37:45.572786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.572793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.572799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.708 [2024-11-26 07:37:45.584276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.584609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.584626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.584633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.584795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.584965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.584978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.584984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.584991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.708 [2024-11-26 07:37:45.597140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.597429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.597446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.597453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.597615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.597778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.597787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.597794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.597800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.708 [2024-11-26 07:37:45.610075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.610425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.610442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.610451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.610624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.610796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.610806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.610813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.610819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.708 [2024-11-26 07:37:45.623214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.623636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.623654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.623662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.623838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.624024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.624035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.624042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.624050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.708 [2024-11-26 07:37:45.636244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.636552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.636598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.636621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.637107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.637286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.637296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.637303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.637310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.708 [2024-11-26 07:37:45.649255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.649740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.649785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.649808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.650357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.650522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.650532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.650540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.650546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.708 [2024-11-26 07:37:45.662178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.662676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.662720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.662744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.663336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.663919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.663945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.663978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.663999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.708 [2024-11-26 07:37:45.675225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.675507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.675528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.675535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.675698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.675862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.675872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.675878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.675884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.708 [2024-11-26 07:37:45.688119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.688460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.688505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.688529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.689064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.689239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.708 [2024-11-26 07:37:45.689248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.708 [2024-11-26 07:37:45.689256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.708 [2024-11-26 07:37:45.689263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.708 [2024-11-26 07:37:45.701037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.708 [2024-11-26 07:37:45.701334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.708 [2024-11-26 07:37:45.701352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.708 [2024-11-26 07:37:45.701360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.708 [2024-11-26 07:37:45.701531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.708 [2024-11-26 07:37:45.701704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.709 [2024-11-26 07:37:45.701713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.709 [2024-11-26 07:37:45.701720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.709 [2024-11-26 07:37:45.701727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.709 [2024-11-26 07:37:45.713961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.709 [2024-11-26 07:37:45.714346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.709 [2024-11-26 07:37:45.714364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.709 [2024-11-26 07:37:45.714372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.709 [2024-11-26 07:37:45.714548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.709 [2024-11-26 07:37:45.714721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.709 [2024-11-26 07:37:45.714731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.709 [2024-11-26 07:37:45.714737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.709 [2024-11-26 07:37:45.714744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.709 [2024-11-26 07:37:45.726936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.709 [2024-11-26 07:37:45.727277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.709 [2024-11-26 07:37:45.727294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.709 [2024-11-26 07:37:45.727301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.709 [2024-11-26 07:37:45.727464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.709 [2024-11-26 07:37:45.727626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.709 [2024-11-26 07:37:45.727636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.709 [2024-11-26 07:37:45.727643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.709 [2024-11-26 07:37:45.727650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.709 [2024-11-26 07:37:45.740021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.709 [2024-11-26 07:37:45.740324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.709 [2024-11-26 07:37:45.740341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.709 [2024-11-26 07:37:45.740349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.709 [2024-11-26 07:37:45.740512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.709 [2024-11-26 07:37:45.740675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.709 [2024-11-26 07:37:45.740685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.709 [2024-11-26 07:37:45.740691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.709 [2024-11-26 07:37:45.740698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.709 [2024-11-26 07:37:45.752833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.709 [2024-11-26 07:37:45.753190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.709 [2024-11-26 07:37:45.753248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.709 [2024-11-26 07:37:45.753272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.709 [2024-11-26 07:37:45.753848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.709 [2024-11-26 07:37:45.754071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.709 [2024-11-26 07:37:45.754085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.709 [2024-11-26 07:37:45.754092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.709 [2024-11-26 07:37:45.754099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.709 [2024-11-26 07:37:45.765851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.709 [2024-11-26 07:37:45.766205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.709 [2024-11-26 07:37:45.766223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.709 [2024-11-26 07:37:45.766231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.709 [2024-11-26 07:37:45.766403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.709 [2024-11-26 07:37:45.766576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.709 [2024-11-26 07:37:45.766586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.709 [2024-11-26 07:37:45.766592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.709 [2024-11-26 07:37:45.766599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.709 [2024-11-26 07:37:45.778801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.709 [2024-11-26 07:37:45.779158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.709 [2024-11-26 07:37:45.779175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.709 [2024-11-26 07:37:45.779183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.709 [2024-11-26 07:37:45.779346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.709 [2024-11-26 07:37:45.779509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.709 [2024-11-26 07:37:45.779519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.709 [2024-11-26 07:37:45.779526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.709 [2024-11-26 07:37:45.779533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.709 [2024-11-26 07:37:45.791831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.709 [2024-11-26 07:37:45.792204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.709 [2024-11-26 07:37:45.792248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.709 [2024-11-26 07:37:45.792272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.709 [2024-11-26 07:37:45.792851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.709 [2024-11-26 07:37:45.793074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.709 [2024-11-26 07:37:45.793084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.709 [2024-11-26 07:37:45.793090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.709 [2024-11-26 07:37:45.793097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.969 6966.50 IOPS, 27.21 MiB/s [2024-11-26T06:37:46.069Z] [2024-11-26 07:37:45.806115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.969 [2024-11-26 07:37:45.806467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.969 [2024-11-26 07:37:45.806486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.969 [2024-11-26 07:37:45.806495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.969 [2024-11-26 07:37:45.806672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.969 [2024-11-26 07:37:45.806849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.969 [2024-11-26 07:37:45.806859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.969 [2024-11-26 07:37:45.806866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.969 [2024-11-26 07:37:45.806873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.969 [2024-11-26 07:37:45.819000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.969 [2024-11-26 07:37:45.819386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.969 [2024-11-26 07:37:45.819404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.969 [2024-11-26 07:37:45.819412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.969 [2024-11-26 07:37:45.819584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.969 [2024-11-26 07:37:45.819758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.969 [2024-11-26 07:37:45.819768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.969 [2024-11-26 07:37:45.819775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.969 [2024-11-26 07:37:45.819781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.969 [2024-11-26 07:37:45.831907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.969 [2024-11-26 07:37:45.832330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.969 [2024-11-26 07:37:45.832348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.969 [2024-11-26 07:37:45.832355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.969 [2024-11-26 07:37:45.832944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.969 [2024-11-26 07:37:45.833493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.969 [2024-11-26 07:37:45.833503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.969 [2024-11-26 07:37:45.833510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.969 [2024-11-26 07:37:45.833517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.969 [2024-11-26 07:37:45.844778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.969 [2024-11-26 07:37:45.845095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.969 [2024-11-26 07:37:45.845117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.969 [2024-11-26 07:37:45.845126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.969 [2024-11-26 07:37:45.845297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.969 [2024-11-26 07:37:45.845470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.969 [2024-11-26 07:37:45.845479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.969 [2024-11-26 07:37:45.845486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.969 [2024-11-26 07:37:45.845493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.969 [2024-11-26 07:37:45.857650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.969 [2024-11-26 07:37:45.858075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.969 [2024-11-26 07:37:45.858093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.969 [2024-11-26 07:37:45.858101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.969 [2024-11-26 07:37:45.858282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.969 [2024-11-26 07:37:45.858446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.969 [2024-11-26 07:37:45.858456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.969 [2024-11-26 07:37:45.858462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.969 [2024-11-26 07:37:45.858469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.969 [2024-11-26 07:37:45.870563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.969 [2024-11-26 07:37:45.870934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.969 [2024-11-26 07:37:45.870959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.969 [2024-11-26 07:37:45.870968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.969 [2024-11-26 07:37:45.871142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.871320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.871329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.871336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.871342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.970 [2024-11-26 07:37:45.883447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.883817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.883834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.883843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.884033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.884207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.884218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.884224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.884231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.970 [2024-11-26 07:37:45.896576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.897038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.897057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.897066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.897250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.897424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.897434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.897441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.897448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.970 [2024-11-26 07:37:45.909516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.909951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.909969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.909977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.910140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.910303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.910313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.910319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.910326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.970 [2024-11-26 07:37:45.922668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.923035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.923054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.923064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.923241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.923419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.923432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.923439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.923446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.970 [2024-11-26 07:37:45.935795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.936165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.936183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.936192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.936369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.936547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.936557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.936564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.936570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.970 [2024-11-26 07:37:45.948939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.949387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.949405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.949413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.949595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.949805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.949816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.949825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.949832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.970 [2024-11-26 07:37:45.962320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.962705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.962724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.962733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.962927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.963142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.963153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.963160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.963171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.970 [2024-11-26 07:37:45.975696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.976157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.976176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.976186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.976370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.976554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.970 [2024-11-26 07:37:45.976565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.970 [2024-11-26 07:37:45.976572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.970 [2024-11-26 07:37:45.976578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.970 [2024-11-26 07:37:45.988760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.970 [2024-11-26 07:37:45.989205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.970 [2024-11-26 07:37:45.989251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.970 [2024-11-26 07:37:45.989274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.970 [2024-11-26 07:37:45.989741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.970 [2024-11-26 07:37:45.989919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.971 [2024-11-26 07:37:45.989929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.971 [2024-11-26 07:37:45.989936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.971 [2024-11-26 07:37:45.989943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.971 [2024-11-26 07:37:46.001726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.971 [2024-11-26 07:37:46.002130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.971 [2024-11-26 07:37:46.002148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.971 [2024-11-26 07:37:46.002156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.971 [2024-11-26 07:37:46.002318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.971 [2024-11-26 07:37:46.002482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.971 [2024-11-26 07:37:46.002492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.971 [2024-11-26 07:37:46.002499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.971 [2024-11-26 07:37:46.002505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.971 [2024-11-26 07:37:46.014612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.971 [2024-11-26 07:37:46.015028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.971 [2024-11-26 07:37:46.015049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.971 [2024-11-26 07:37:46.015057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.971 [2024-11-26 07:37:46.015220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.971 [2024-11-26 07:37:46.015383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.971 [2024-11-26 07:37:46.015393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.971 [2024-11-26 07:37:46.015399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.971 [2024-11-26 07:37:46.015406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.971 [2024-11-26 07:37:46.027499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.971 [2024-11-26 07:37:46.027912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.971 [2024-11-26 07:37:46.027929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.971 [2024-11-26 07:37:46.027936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.971 [2024-11-26 07:37:46.028105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.971 [2024-11-26 07:37:46.028269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.971 [2024-11-26 07:37:46.028279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.971 [2024-11-26 07:37:46.028285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.971 [2024-11-26 07:37:46.028292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:17.971 [2024-11-26 07:37:46.040326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.971 [2024-11-26 07:37:46.040724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.971 [2024-11-26 07:37:46.040741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.971 [2024-11-26 07:37:46.040747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.971 [2024-11-26 07:37:46.040910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.971 [2024-11-26 07:37:46.041080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.971 [2024-11-26 07:37:46.041090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.971 [2024-11-26 07:37:46.041096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.971 [2024-11-26 07:37:46.041103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:17.971 [2024-11-26 07:37:46.053188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:17.971 [2024-11-26 07:37:46.053617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.971 [2024-11-26 07:37:46.053662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:17.971 [2024-11-26 07:37:46.053687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:17.971 [2024-11-26 07:37:46.054106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:17.971 [2024-11-26 07:37:46.054291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:17.971 [2024-11-26 07:37:46.054301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:17.971 [2024-11-26 07:37:46.054309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:17.971 [2024-11-26 07:37:46.054316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.230 [2024-11-26 07:37:46.066249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.230 [2024-11-26 07:37:46.066687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.230 [2024-11-26 07:37:46.066705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.230 [2024-11-26 07:37:46.066713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.230 [2024-11-26 07:37:46.066890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.230 [2024-11-26 07:37:46.067073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.230 [2024-11-26 07:37:46.067084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.230 [2024-11-26 07:37:46.067091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.230 [2024-11-26 07:37:46.067098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.231 [2024-11-26 07:37:46.079162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.079560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.079577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.079585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.079747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.079910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.079919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.079926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.079932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.231 [2024-11-26 07:37:46.092016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.092437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.092481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.092505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.093055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.093221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.093233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.093240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.093246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.231 [2024-11-26 07:37:46.104894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.105316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.105334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.105342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.105505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.105667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.105677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.105683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.105690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.231 [2024-11-26 07:37:46.117779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.118191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.118235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.118259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.118835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.119280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.119290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.119296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.119303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.231 [2024-11-26 07:37:46.130694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.131104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.131152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.131176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.131753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.131963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.131972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.131979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.131989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.231 [2024-11-26 07:37:46.143535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.143965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.144010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.144034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.144610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.145196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.145206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.145213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.145221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.231 [2024-11-26 07:37:46.156647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.157055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.157074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.157082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.157254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.157434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.157444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.157451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.157458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.231 [2024-11-26 07:37:46.169520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.169957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.169975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.169983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.170145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.170308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.170318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.170324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.170330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.231 [2024-11-26 07:37:46.182406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.182847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.182900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.182924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.183428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.183594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.183603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.183610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.183616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.231 [2024-11-26 07:37:46.195237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.195654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.195671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.195679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.195842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.231 [2024-11-26 07:37:46.196010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.231 [2024-11-26 07:37:46.196020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.231 [2024-11-26 07:37:46.196026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.231 [2024-11-26 07:37:46.196033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.231 [2024-11-26 07:37:46.208109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.231 [2024-11-26 07:37:46.208531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.231 [2024-11-26 07:37:46.208548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.231 [2024-11-26 07:37:46.208557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.231 [2024-11-26 07:37:46.208720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.208883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.208893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.208899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.208905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.232 [2024-11-26 07:37:46.220992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.232 [2024-11-26 07:37:46.221350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.232 [2024-11-26 07:37:46.221367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.232 [2024-11-26 07:37:46.221374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.232 [2024-11-26 07:37:46.221538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.221701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.221710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.221716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.221723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.232 [2024-11-26 07:37:46.233905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.232 [2024-11-26 07:37:46.234325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.232 [2024-11-26 07:37:46.234364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.232 [2024-11-26 07:37:46.234390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.232 [2024-11-26 07:37:46.234982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.235564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.235588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.235596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.235603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.232 [2024-11-26 07:37:46.246789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.232 [2024-11-26 07:37:46.247209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.232 [2024-11-26 07:37:46.247226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.232 [2024-11-26 07:37:46.247234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.232 [2024-11-26 07:37:46.247395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.247557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.247567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.247574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.247580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.232 [2024-11-26 07:37:46.259674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.232 [2024-11-26 07:37:46.260031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.232 [2024-11-26 07:37:46.260076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.232 [2024-11-26 07:37:46.260099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.232 [2024-11-26 07:37:46.260538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.260702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.260715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.260721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.260728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.232 [2024-11-26 07:37:46.272565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.232 [2024-11-26 07:37:46.273005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.232 [2024-11-26 07:37:46.273050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.232 [2024-11-26 07:37:46.273074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.232 [2024-11-26 07:37:46.273508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.273671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.273681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.273687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.273694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.232 [2024-11-26 07:37:46.285476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.232 [2024-11-26 07:37:46.285908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.232 [2024-11-26 07:37:46.285964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.232 [2024-11-26 07:37:46.285989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.232 [2024-11-26 07:37:46.286499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.286663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.286672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.286679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.286685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.232 [2024-11-26 07:37:46.298302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.232 [2024-11-26 07:37:46.298694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.232 [2024-11-26 07:37:46.298711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.232 [2024-11-26 07:37:46.298719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.232 [2024-11-26 07:37:46.298881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.299051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.299062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.299068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.299075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.232 [2024-11-26 07:37:46.311164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.232 [2024-11-26 07:37:46.311600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.232 [2024-11-26 07:37:46.311644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.232 [2024-11-26 07:37:46.311668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.232 [2024-11-26 07:37:46.312257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.232 [2024-11-26 07:37:46.312700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.232 [2024-11-26 07:37:46.312710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.232 [2024-11-26 07:37:46.312716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.232 [2024-11-26 07:37:46.312722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.493 [2024-11-26 07:37:46.324368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.324797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.324815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.324823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.325005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.325183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.325194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.325201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.325208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.493 [2024-11-26 07:37:46.337220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.337628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.337672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.337696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.338289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.338792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.338802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.338808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.338815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.493 [2024-11-26 07:37:46.350069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.350467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.350488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.350496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.350659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.350823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.350832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.350838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.350846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.493 [2024-11-26 07:37:46.362919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.363326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.363343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.363350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.363513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.363676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.363685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.363692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.363699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.493 [2024-11-26 07:37:46.375871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.376310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.376327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.376335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.376498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.376661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.376671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.376678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.376684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.493 [2024-11-26 07:37:46.388764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.389101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.389119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.389126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.389293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.389456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.389465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.389472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.389478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.493 [2024-11-26 07:37:46.401553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.401922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.401939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.401953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.402115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.402279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.402288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.402295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.402302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.493 [2024-11-26 07:37:46.414622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.414976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.414995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.415003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.415181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.415359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.415369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.415376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.415383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.493 [2024-11-26 07:37:46.427757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.428192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.428210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.428218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.428395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.428574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.493 [2024-11-26 07:37:46.428587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.493 [2024-11-26 07:37:46.428594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.493 [2024-11-26 07:37:46.428601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.493 [2024-11-26 07:37:46.440942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.493 [2024-11-26 07:37:46.441394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.493 [2024-11-26 07:37:46.441437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.493 [2024-11-26 07:37:46.441461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.493 [2024-11-26 07:37:46.441930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.493 [2024-11-26 07:37:46.442114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.442124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.442131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.442138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.494 [2024-11-26 07:37:46.454008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.454431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.454476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.454499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.455017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.455203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.455212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.455219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.455227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.494 [2024-11-26 07:37:46.467066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.467464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.467481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.467489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.467652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.467815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.467825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.467831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.467837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.494 [2024-11-26 07:37:46.479895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.480330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.480347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.480355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.480517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.480680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.480689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.480696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.480702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.494 [2024-11-26 07:37:46.492732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.493149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.493166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.493173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.493336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.493499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.493509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.493515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.493522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.494 [2024-11-26 07:37:46.505603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.506001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.506018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.506025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.506187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.506351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.506360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.506366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.506373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.494 [2024-11-26 07:37:46.518403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.518837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.518890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.518915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.519508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.520101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.520128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.520150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.520171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.494 [2024-11-26 07:37:46.531324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.531758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.531803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.531826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.532418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.532988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.532998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.533004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.533011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.494 [2024-11-26 07:37:46.544255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.544663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.544707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.544730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.545145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.545310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.545319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.545326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.545332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.494 [2024-11-26 07:37:46.557198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.557613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.557631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.557638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.557804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.557972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.494 [2024-11-26 07:37:46.557982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.494 [2024-11-26 07:37:46.557988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.494 [2024-11-26 07:37:46.557995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.494 [2024-11-26 07:37:46.570008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.494 [2024-11-26 07:37:46.570431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.494 [2024-11-26 07:37:46.570448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.494 [2024-11-26 07:37:46.570456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.494 [2024-11-26 07:37:46.570618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.494 [2024-11-26 07:37:46.570782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.495 [2024-11-26 07:37:46.570792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.495 [2024-11-26 07:37:46.570798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.495 [2024-11-26 07:37:46.570805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.495 [2024-11-26 07:37:46.583080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.495 [2024-11-26 07:37:46.583428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.495 [2024-11-26 07:37:46.583446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.495 [2024-11-26 07:37:46.583455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.495 [2024-11-26 07:37:46.583632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.495 [2024-11-26 07:37:46.583811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.495 [2024-11-26 07:37:46.583820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.495 [2024-11-26 07:37:46.583829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.495 [2024-11-26 07:37:46.583835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.754 [2024-11-26 07:37:46.596014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.754 [2024-11-26 07:37:46.596430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.754 [2024-11-26 07:37:46.596474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.754 [2024-11-26 07:37:46.596498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.754 [2024-11-26 07:37:46.596984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.754 [2024-11-26 07:37:46.597159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.754 [2024-11-26 07:37:46.597183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.754 [2024-11-26 07:37:46.597191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.754 [2024-11-26 07:37:46.597198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.754 [2024-11-26 07:37:46.609011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.609280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.609298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.609306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.609479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.609651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.609660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.609667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.609674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.755 [2024-11-26 07:37:46.621922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.622275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.622291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.622298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.622460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.622624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.622633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.622639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.622645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.755 [2024-11-26 07:37:46.634838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.635291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.635332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.635358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.635933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.636192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.636201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.636210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.636217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.755 [2024-11-26 07:37:46.647683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.648102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.648152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.648176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.648754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.649345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.649382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.649390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.649397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.755 [2024-11-26 07:37:46.660519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.660872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.660889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.660898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.661065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.661230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.661239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.661246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.661252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.755 [2024-11-26 07:37:46.673486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.673846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.673863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.673870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.674048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.674221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.674232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.674240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.674247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.755 [2024-11-26 07:37:46.686343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.686708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.686730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.686738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.686900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.687071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.687081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.687088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.687094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.755 [2024-11-26 07:37:46.699161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.699570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.699614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.699638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.700232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.700814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.700842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.700849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.700857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.755 [2024-11-26 07:37:46.712041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.712459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.755 [2024-11-26 07:37:46.712477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.755 [2024-11-26 07:37:46.712484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.755 [2024-11-26 07:37:46.712647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.755 [2024-11-26 07:37:46.712810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.755 [2024-11-26 07:37:46.712820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.755 [2024-11-26 07:37:46.712826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.755 [2024-11-26 07:37:46.712833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.755 [2024-11-26 07:37:46.724964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.755 [2024-11-26 07:37:46.725306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.725323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.725332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.725507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.725686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.725695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.725702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.725709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.756 [2024-11-26 07:37:46.738079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.738534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.738580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.738604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.739198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.739706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.739716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.739723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.739729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.756 [2024-11-26 07:37:46.750984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.751323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.751340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.751347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.751510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.751673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.751683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.751689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.751696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.756 [2024-11-26 07:37:46.763783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.764216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.764261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.764285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.764653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.764817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.764829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.764835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.764842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.756 [2024-11-26 07:37:46.776717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.777133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.777151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.777159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.777321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.777485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.777494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.777500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.777507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.756 [2024-11-26 07:37:46.789530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.789964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.790009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.790033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.790452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.790616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.790626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.790632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.790638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.756 [2024-11-26 07:37:46.802410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.802829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.802846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.802854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.803022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.803186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.803195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.803202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.803208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.756 5573.20 IOPS, 21.77 MiB/s [2024-11-26T06:37:46.856Z] [2024-11-26 07:37:46.815254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.815512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.815529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.815537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.815700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.815863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.815872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.815878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.815885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:18.756 [2024-11-26 07:37:46.828122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.828543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.828600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.828623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.829215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.756 [2024-11-26 07:37:46.829799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.756 [2024-11-26 07:37:46.829809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.756 [2024-11-26 07:37:46.829815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.756 [2024-11-26 07:37:46.829821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:18.756 [2024-11-26 07:37:46.840921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:18.756 [2024-11-26 07:37:46.841347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.756 [2024-11-26 07:37:46.841364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:18.756 [2024-11-26 07:37:46.841372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:18.756 [2024-11-26 07:37:46.841535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:18.757 [2024-11-26 07:37:46.841698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:18.757 [2024-11-26 07:37:46.841708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:18.757 [2024-11-26 07:37:46.841714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:18.757 [2024-11-26 07:37:46.841722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.017 [2024-11-26 07:37:46.854049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.017 [2024-11-26 07:37:46.854505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.017 [2024-11-26 07:37:46.854557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.017 [2024-11-26 07:37:46.854581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.017 [2024-11-26 07:37:46.855127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.017 [2024-11-26 07:37:46.855302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.017 [2024-11-26 07:37:46.855311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.017 [2024-11-26 07:37:46.855317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.017 [2024-11-26 07:37:46.855324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.017 [2024-11-26 07:37:46.867457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.017 [2024-11-26 07:37:46.867867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.017 [2024-11-26 07:37:46.867885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.017 [2024-11-26 07:37:46.867893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.017 [2024-11-26 07:37:46.868065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.017 [2024-11-26 07:37:46.868234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.017 [2024-11-26 07:37:46.868244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.017 [2024-11-26 07:37:46.868251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.017 [2024-11-26 07:37:46.868259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.017 [2024-11-26 07:37:46.880264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.017 [2024-11-26 07:37:46.880685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.017 [2024-11-26 07:37:46.880702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.017 [2024-11-26 07:37:46.880709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.017 [2024-11-26 07:37:46.880871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.017 [2024-11-26 07:37:46.881041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.017 [2024-11-26 07:37:46.881051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.017 [2024-11-26 07:37:46.881057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.017 [2024-11-26 07:37:46.881063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.017 [2024-11-26 07:37:46.893227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.017 [2024-11-26 07:37:46.893564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.017 [2024-11-26 07:37:46.893581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.017 [2024-11-26 07:37:46.893589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.017 [2024-11-26 07:37:46.893755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.017 [2024-11-26 07:37:46.893918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.017 [2024-11-26 07:37:46.893928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.017 [2024-11-26 07:37:46.893934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.017 [2024-11-26 07:37:46.893941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.017 [2024-11-26 07:37:46.906113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.017 [2024-11-26 07:37:46.906548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.017 [2024-11-26 07:37:46.906594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.017 [2024-11-26 07:37:46.906618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.017 [2024-11-26 07:37:46.907071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.017 [2024-11-26 07:37:46.907237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.017 [2024-11-26 07:37:46.907246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.017 [2024-11-26 07:37:46.907253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.017 [2024-11-26 07:37:46.907259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.017 [2024-11-26 07:37:46.918966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.017 [2024-11-26 07:37:46.919405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.017 [2024-11-26 07:37:46.919464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.017 [2024-11-26 07:37:46.919489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.017 [2024-11-26 07:37:46.920035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.017 [2024-11-26 07:37:46.920200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.017 [2024-11-26 07:37:46.920209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.017 [2024-11-26 07:37:46.920216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.017 [2024-11-26 07:37:46.920222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.017 [2024-11-26 07:37:46.932054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.017 [2024-11-26 07:37:46.932491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.017 [2024-11-26 07:37:46.932509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.017 [2024-11-26 07:37:46.932518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.017 [2024-11-26 07:37:46.932695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.017 [2024-11-26 07:37:46.932873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:46.932887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:46.932895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:46.932902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.018 [2024-11-26 07:37:46.944886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:46.945312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:46.945329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:46.945338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:46.945499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:46.945662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:46.945671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:46.945678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:46.945684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.018 [2024-11-26 07:37:46.957874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:46.958275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:46.958293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:46.958301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:46.958463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:46.958626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:46.958635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:46.958642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:46.958648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.018 [2024-11-26 07:37:46.970693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:46.971109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:46.971153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:46.971179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:46.971757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:46.971984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:46.971994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:46.972000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:46.972010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.018 [2024-11-26 07:37:46.983775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:46.984234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:46.984253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:46.984261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:46.984437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:46.984618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:46.984628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:46.984635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:46.984641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.018 [2024-11-26 07:37:46.996813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:46.997236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:46.997253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:46.997262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:46.997438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:46.997616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:46.997626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:46.997633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:46.997640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.018 [2024-11-26 07:37:47.009997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:47.010354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:47.010372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:47.010380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:47.010556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:47.010735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:47.010745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:47.010752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:47.010758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.018 [2024-11-26 07:37:47.023122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:47.023514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:47.023534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:47.023543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:47.023721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:47.023898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:47.023908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:47.023916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:47.023922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.018 [2024-11-26 07:37:47.036262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:47.036677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:47.036695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:47.036703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:47.036880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:47.037064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:47.037074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:47.037081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:47.037087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.018 [2024-11-26 07:37:47.049292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:47.049722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:47.049739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:47.049747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:47.049924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.018 [2024-11-26 07:37:47.050110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.018 [2024-11-26 07:37:47.050121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.018 [2024-11-26 07:37:47.050128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.018 [2024-11-26 07:37:47.050135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.018 [2024-11-26 07:37:47.062341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.018 [2024-11-26 07:37:47.062701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.018 [2024-11-26 07:37:47.062719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.018 [2024-11-26 07:37:47.062727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.018 [2024-11-26 07:37:47.062907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.019 [2024-11-26 07:37:47.063091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.019 [2024-11-26 07:37:47.063102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.019 [2024-11-26 07:37:47.063109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.019 [2024-11-26 07:37:47.063116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.019 [2024-11-26 07:37:47.075461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.019 [2024-11-26 07:37:47.075888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.019 [2024-11-26 07:37:47.075906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.019 [2024-11-26 07:37:47.075914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.019 [2024-11-26 07:37:47.076097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.019 [2024-11-26 07:37:47.076276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.019 [2024-11-26 07:37:47.076285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.019 [2024-11-26 07:37:47.076292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.019 [2024-11-26 07:37:47.076300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.019 [2024-11-26 07:37:47.088642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.019 [2024-11-26 07:37:47.089071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.019 [2024-11-26 07:37:47.089116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.019 [2024-11-26 07:37:47.089140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.019 [2024-11-26 07:37:47.089616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.019 [2024-11-26 07:37:47.089795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.019 [2024-11-26 07:37:47.089805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.019 [2024-11-26 07:37:47.089812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.019 [2024-11-26 07:37:47.089819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.019 [2024-11-26 07:37:47.101697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.019 [2024-11-26 07:37:47.102081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.019 [2024-11-26 07:37:47.102126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.019 [2024-11-26 07:37:47.102150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.019 [2024-11-26 07:37:47.102612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.019 [2024-11-26 07:37:47.102786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.019 [2024-11-26 07:37:47.102799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.019 [2024-11-26 07:37:47.102806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.019 [2024-11-26 07:37:47.102813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.279 [2024-11-26 07:37:47.114746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.279 [2024-11-26 07:37:47.115166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.279 [2024-11-26 07:37:47.115211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.279 [2024-11-26 07:37:47.115235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.279 [2024-11-26 07:37:47.115689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.279 [2024-11-26 07:37:47.115868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.279 [2024-11-26 07:37:47.115878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.279 [2024-11-26 07:37:47.115885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.279 [2024-11-26 07:37:47.115891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.279 [2024-11-26 07:37:47.127675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.279 [2024-11-26 07:37:47.128036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.279 [2024-11-26 07:37:47.128053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.279 [2024-11-26 07:37:47.128061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.279 [2024-11-26 07:37:47.128223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.279 [2024-11-26 07:37:47.128387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.279 [2024-11-26 07:37:47.128396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.279 [2024-11-26 07:37:47.128403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.279 [2024-11-26 07:37:47.128409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.279 [2024-11-26 07:37:47.140701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.279 [2024-11-26 07:37:47.141138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.279 [2024-11-26 07:37:47.141183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.279 [2024-11-26 07:37:47.141206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.279 [2024-11-26 07:37:47.141709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.279 [2024-11-26 07:37:47.141874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.279 [2024-11-26 07:37:47.141883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.279 [2024-11-26 07:37:47.141890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.279 [2024-11-26 07:37:47.141900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.279 [2024-11-26 07:37:47.153562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.279 [2024-11-26 07:37:47.153975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.279 [2024-11-26 07:37:47.153993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.279 [2024-11-26 07:37:47.154001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.279 [2024-11-26 07:37:47.154164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.279 [2024-11-26 07:37:47.154327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.279 [2024-11-26 07:37:47.154336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.279 [2024-11-26 07:37:47.154342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.279 [2024-11-26 07:37:47.154349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.279 [2024-11-26 07:37:47.166396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.279 [2024-11-26 07:37:47.166814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.166832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.166842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.167019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.167193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.167203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.167210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.167217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.280 [2024-11-26 07:37:47.179240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.179695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.179739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.179762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.180358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.180736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.180746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.180754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.180761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.280 [2024-11-26 07:37:47.192229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.192621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.192642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.192650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.192822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.193003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.193014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.193021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.193028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.280 [2024-11-26 07:37:47.205336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.205811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.205855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.205879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.206472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.206925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.206935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.206941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.206953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.280 [2024-11-26 07:37:47.218149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.218485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.218529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.218553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.219143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.219730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.219755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.219778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.219784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.280 [2024-11-26 07:37:47.230986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.231391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.231408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.231416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.231582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.231745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.231754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.231761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.231767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.280 [2024-11-26 07:37:47.243887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.244327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.244373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.244398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.244959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.245139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.245150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.245157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.245164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.280 [2024-11-26 07:37:47.256879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.257256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.257301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.257324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.257830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.258000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.258010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.258016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.258023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.280 [2024-11-26 07:37:47.269770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.270131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.270149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.270157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.270329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.270501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.270515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.270523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.270531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.280 [2024-11-26 07:37:47.282681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.280 [2024-11-26 07:37:47.283043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.280 [2024-11-26 07:37:47.283060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.280 [2024-11-26 07:37:47.283068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.280 [2024-11-26 07:37:47.283230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.280 [2024-11-26 07:37:47.283393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.280 [2024-11-26 07:37:47.283403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.280 [2024-11-26 07:37:47.283409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.280 [2024-11-26 07:37:47.283416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.280 [2024-11-26 07:37:47.295521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.281 [2024-11-26 07:37:47.295972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.281 [2024-11-26 07:37:47.295990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.281 [2024-11-26 07:37:47.295998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.281 [2024-11-26 07:37:47.296160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.281 [2024-11-26 07:37:47.296324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.281 [2024-11-26 07:37:47.296334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.281 [2024-11-26 07:37:47.296341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.281 [2024-11-26 07:37:47.296347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.281 [2024-11-26 07:37:47.308389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.281 [2024-11-26 07:37:47.308815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.281 [2024-11-26 07:37:47.308832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.281 [2024-11-26 07:37:47.308839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.281 [2024-11-26 07:37:47.309007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.281 [2024-11-26 07:37:47.309172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.281 [2024-11-26 07:37:47.309182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.281 [2024-11-26 07:37:47.309188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.281 [2024-11-26 07:37:47.309200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.281 [2024-11-26 07:37:47.321302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.281 [2024-11-26 07:37:47.321689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.281 [2024-11-26 07:37:47.321706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.281 [2024-11-26 07:37:47.321713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.281 [2024-11-26 07:37:47.321875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.281 [2024-11-26 07:37:47.322045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.281 [2024-11-26 07:37:47.322055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.281 [2024-11-26 07:37:47.322061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.281 [2024-11-26 07:37:47.322069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 889341 Killed "${NVMF_APP[@]}" "$@" 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.281 [2024-11-26 07:37:47.334494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.281 [2024-11-26 07:37:47.334787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.281 [2024-11-26 07:37:47.334804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.281 [2024-11-26 07:37:47.334812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.281 [2024-11-26 07:37:47.334995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.281 [2024-11-26 07:37:47.335174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.281 [2024-11-26 07:37:47.335185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.281 [2024-11-26 07:37:47.335192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.281 [2024-11-26 07:37:47.335199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=890700 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 890700 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 890700 ']' 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.281 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.281 [2024-11-26 07:37:47.347574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.281 [2024-11-26 07:37:47.347964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.281 [2024-11-26 07:37:47.347980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.281 [2024-11-26 07:37:47.347989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.281 [2024-11-26 07:37:47.348165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.281 [2024-11-26 07:37:47.348342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.281 [2024-11-26 07:37:47.348350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.281 [2024-11-26 07:37:47.348358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.281 [2024-11-26 07:37:47.348365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.281 [2024-11-26 07:37:47.360716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.281 [2024-11-26 07:37:47.361104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.281 [2024-11-26 07:37:47.361122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.281 [2024-11-26 07:37:47.361132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.281 [2024-11-26 07:37:47.361309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.281 [2024-11-26 07:37:47.361488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.281 [2024-11-26 07:37:47.361499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.281 [2024-11-26 07:37:47.361506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.281 [2024-11-26 07:37:47.361514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.542 [2024-11-26 07:37:47.373923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.542 [2024-11-26 07:37:47.374265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.542 [2024-11-26 07:37:47.374283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.542 [2024-11-26 07:37:47.374292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.542 [2024-11-26 07:37:47.374463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.542 [2024-11-26 07:37:47.374636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.542 [2024-11-26 07:37:47.374647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.542 [2024-11-26 07:37:47.374653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.542 [2024-11-26 07:37:47.374661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.542 [2024-11-26 07:37:47.385112] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:28:19.542 [2024-11-26 07:37:47.385154] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.542 [2024-11-26 07:37:47.386998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.542 [2024-11-26 07:37:47.387385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.542 [2024-11-26 07:37:47.387402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.542 [2024-11-26 07:37:47.387411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.542 [2024-11-26 07:37:47.387583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.542 [2024-11-26 07:37:47.387755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.542 [2024-11-26 07:37:47.387765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.542 [2024-11-26 07:37:47.387773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.542 [2024-11-26 07:37:47.387780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.542 [2024-11-26 07:37:47.399977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.542 [2024-11-26 07:37:47.400338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.542 [2024-11-26 07:37:47.400356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.542 [2024-11-26 07:37:47.400364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.542 [2024-11-26 07:37:47.400537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.542 [2024-11-26 07:37:47.400711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.542 [2024-11-26 07:37:47.400721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.542 [2024-11-26 07:37:47.400728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.542 [2024-11-26 07:37:47.400735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.542 [2024-11-26 07:37:47.413025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.542 [2024-11-26 07:37:47.413455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.542 [2024-11-26 07:37:47.413473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.542 [2024-11-26 07:37:47.413481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.542 [2024-11-26 07:37:47.413653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.542 [2024-11-26 07:37:47.413827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.542 [2024-11-26 07:37:47.413837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.542 [2024-11-26 07:37:47.413844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.542 [2024-11-26 07:37:47.413851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.542 [2024-11-26 07:37:47.426186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.542 [2024-11-26 07:37:47.426556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.542 [2024-11-26 07:37:47.426574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.542 [2024-11-26 07:37:47.426582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.542 [2024-11-26 07:37:47.426759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.542 [2024-11-26 07:37:47.426936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.542 [2024-11-26 07:37:47.426952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.542 [2024-11-26 07:37:47.426961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.542 [2024-11-26 07:37:47.426969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.542 [2024-11-26 07:37:47.439353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.542 [2024-11-26 07:37:47.439659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.542 [2024-11-26 07:37:47.439676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.439686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.439864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.440055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.440066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.440073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.440081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.543 [2024-11-26 07:37:47.452377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:19.543 [2024-11-26 07:37:47.452423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.452845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.452864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.452873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.453056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.453234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.453244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.453251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.453259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.543 [2024-11-26 07:37:47.465613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.465992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.466016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.466026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.466199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.466373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.466383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.466390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.466397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.543 [2024-11-26 07:37:47.478603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.479037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.479056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.479064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.479237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.479411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.479420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.479427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.479434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.543 [2024-11-26 07:37:47.491623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.492035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.492053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.492061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.492234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.492407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.492416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.492423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.492430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.543 [2024-11-26 07:37:47.495373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.543 [2024-11-26 07:37:47.495401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.543 [2024-11-26 07:37:47.495408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.543 [2024-11-26 07:37:47.495414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.543 [2024-11-26 07:37:47.495420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:19.543 [2024-11-26 07:37:47.496755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.543 [2024-11-26 07:37:47.496846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.543 [2024-11-26 07:37:47.496848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.543 [2024-11-26 07:37:47.504825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.505292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.505312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.505322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.505501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.505680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.505690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.505699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.505708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.543 [2024-11-26 07:37:47.517885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.518348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.518370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.518379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.518557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.518737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.518747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.518755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.518763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.543 [2024-11-26 07:37:47.530954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.531415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.531436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.531446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.531626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.531807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.531816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.531826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.531834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.543 [2024-11-26 07:37:47.544034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.544484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.544505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.544515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.543 [2024-11-26 07:37:47.544694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.543 [2024-11-26 07:37:47.544873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.543 [2024-11-26 07:37:47.544883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.543 [2024-11-26 07:37:47.544891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.543 [2024-11-26 07:37:47.544899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.543 [2024-11-26 07:37:47.557092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.543 [2024-11-26 07:37:47.557549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.543 [2024-11-26 07:37:47.557570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.543 [2024-11-26 07:37:47.557579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.544 [2024-11-26 07:37:47.557758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.544 [2024-11-26 07:37:47.557937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.544 [2024-11-26 07:37:47.557954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.544 [2024-11-26 07:37:47.557963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.544 [2024-11-26 07:37:47.557971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.544 [2024-11-26 07:37:47.570165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.544 [2024-11-26 07:37:47.570582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.544 [2024-11-26 07:37:47.570599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.544 [2024-11-26 07:37:47.570608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.544 [2024-11-26 07:37:47.570786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.544 [2024-11-26 07:37:47.570971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.544 [2024-11-26 07:37:47.570981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.544 [2024-11-26 07:37:47.570989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.544 [2024-11-26 07:37:47.570997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.544 [2024-11-26 07:37:47.583330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.544 [2024-11-26 07:37:47.583744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.544 [2024-11-26 07:37:47.583762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.544 [2024-11-26 07:37:47.583775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.544 [2024-11-26 07:37:47.583958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.544 [2024-11-26 07:37:47.584138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.544 [2024-11-26 07:37:47.584148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.544 [2024-11-26 07:37:47.584155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.544 [2024-11-26 07:37:47.584162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.544 [2024-11-26 07:37:47.596517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.544 [2024-11-26 07:37:47.596955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.544 [2024-11-26 07:37:47.596974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.544 [2024-11-26 07:37:47.596982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.544 [2024-11-26 07:37:47.597160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.544 [2024-11-26 07:37:47.597339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.544 [2024-11-26 07:37:47.597349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.544 [2024-11-26 07:37:47.597356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.544 [2024-11-26 07:37:47.597365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.544 [2024-11-26 07:37:47.609558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.544 [2024-11-26 07:37:47.609988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.544 [2024-11-26 07:37:47.610007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.544 [2024-11-26 07:37:47.610015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.544 [2024-11-26 07:37:47.610193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.544 [2024-11-26 07:37:47.610371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.544 [2024-11-26 07:37:47.610381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.544 [2024-11-26 07:37:47.610389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.544 [2024-11-26 07:37:47.610395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.544 [2024-11-26 07:37:47.622746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.544 [2024-11-26 07:37:47.623103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.544 [2024-11-26 07:37:47.623121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.544 [2024-11-26 07:37:47.623130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.544 [2024-11-26 07:37:47.623307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.544 [2024-11-26 07:37:47.623486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.544 [2024-11-26 07:37:47.623496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.544 [2024-11-26 07:37:47.623503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.544 [2024-11-26 07:37:47.623510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.544 [2024-11-26 07:37:47.629373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.544 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.803 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:19.803 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.803 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.803 [2024-11-26 07:37:47.635856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.803 [2024-11-26 07:37:47.636203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.803 [2024-11-26 07:37:47.636221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.803 [2024-11-26 07:37:47.636230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.803 [2024-11-26 07:37:47.636407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.803 [2024-11-26 07:37:47.636586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.803 [2024-11-26 07:37:47.636596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.803 [2024-11-26 07:37:47.636603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.803 [2024-11-26 07:37:47.636610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.803 [2024-11-26 07:37:47.648954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.803 [2024-11-26 07:37:47.649390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.803 [2024-11-26 07:37:47.649408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.803 [2024-11-26 07:37:47.649416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.803 [2024-11-26 07:37:47.649594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.803 [2024-11-26 07:37:47.649771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.803 [2024-11-26 07:37:47.649786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.803 [2024-11-26 07:37:47.649797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.803 [2024-11-26 07:37:47.649805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.803 [2024-11-26 07:37:47.662015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.803 [2024-11-26 07:37:47.662376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.803 [2024-11-26 07:37:47.662394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.803 [2024-11-26 07:37:47.662403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.803 [2024-11-26 07:37:47.662581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.803 [2024-11-26 07:37:47.662769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.803 [2024-11-26 07:37:47.662779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.803 [2024-11-26 07:37:47.662786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.804 [2024-11-26 07:37:47.662793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.804 Malloc0 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.804 [2024-11-26 07:37:47.675166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.804 [2024-11-26 07:37:47.675542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.804 [2024-11-26 07:37:47.675560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.804 [2024-11-26 07:37:47.675568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.804 [2024-11-26 07:37:47.675746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.804 [2024-11-26 07:37:47.675926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.804 [2024-11-26 07:37:47.675936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.804 [2024-11-26 07:37:47.675943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.804 [2024-11-26 07:37:47.675956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.804 [2024-11-26 07:37:47.688298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.804 [2024-11-26 07:37:47.688733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.804 [2024-11-26 07:37:47.688751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb3500 with addr=10.0.0.2, port=4420 00:28:19.804 [2024-11-26 07:37:47.688760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb3500 is same with the state(6) to be set 00:28:19.804 [2024-11-26 07:37:47.688937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb3500 (9): Bad file descriptor 00:28:19.804 [2024-11-26 07:37:47.689125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:19.804 [2024-11-26 07:37:47.689135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:19.804 [2024-11-26 07:37:47.689143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:19.804 [2024-11-26 07:37:47.689151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:19.804 [2024-11-26 07:37:47.691195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.804 07:37:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 889610 00:28:19.804 [2024-11-26 07:37:47.701348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:19.804 4644.33 IOPS, 18.14 MiB/s [2024-11-26T06:37:47.904Z] [2024-11-26 07:37:47.858602] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:22.118 5468.14 IOPS, 21.36 MiB/s [2024-11-26T06:37:51.153Z] 6159.88 IOPS, 24.06 MiB/s [2024-11-26T06:37:52.089Z] 6704.11 IOPS, 26.19 MiB/s [2024-11-26T06:37:53.026Z] 7140.20 IOPS, 27.89 MiB/s [2024-11-26T06:37:54.022Z] 7497.82 IOPS, 29.29 MiB/s [2024-11-26T06:37:55.001Z] 7805.08 IOPS, 30.49 MiB/s [2024-11-26T06:37:55.939Z] 8059.15 IOPS, 31.48 MiB/s [2024-11-26T06:37:56.877Z] 8267.79 IOPS, 32.30 MiB/s [2024-11-26T06:37:56.877Z] 8456.33 IOPS, 33.03 MiB/s 00:28:28.777 Latency(us) 00:28:28.777 [2024-11-26T06:37:56.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.777 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:28.777 Verification LBA range: start 0x0 length 0x4000 00:28:28.777 Nvme1n1 : 15.01 8456.91 33.03 11237.85 0.00 6478.55 666.05 15386.71 00:28:28.777 [2024-11-26T06:37:56.877Z] =================================================================================================================== 00:28:28.777 [2024-11-26T06:37:56.877Z] Total : 8456.91 33.03 11237.85 0.00 6478.55 666.05 15386.71 00:28:29.036 07:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:29.036 07:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.036 07:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.036 07:37:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:29.036 rmmod nvme_tcp 00:28:29.036 rmmod nvme_fabrics 00:28:29.036 rmmod nvme_keyring 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 890700 ']' 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 890700 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 890700 ']' 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 890700 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 890700 
00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 890700' 00:28:29.036 killing process with pid 890700 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 890700 00:28:29.036 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 890700 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.295 07:37:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.834 00:28:31.834 real 0m25.372s 00:28:31.834 user 1m1.156s 00:28:31.834 sys 0m6.186s 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.834 ************************************ 00:28:31.834 END TEST nvmf_bdevperf 00:28:31.834 ************************************ 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.834 ************************************ 00:28:31.834 START TEST nvmf_target_disconnect 00:28:31.834 ************************************ 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:31.834 * Looking for test storage... 
00:28:31.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:31.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.834 --rc genhtml_branch_coverage=1 00:28:31.834 --rc genhtml_function_coverage=1 00:28:31.834 --rc genhtml_legend=1 00:28:31.834 --rc geninfo_all_blocks=1 00:28:31.834 --rc geninfo_unexecuted_blocks=1 00:28:31.834 00:28:31.834 ' 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:31.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.834 --rc genhtml_branch_coverage=1 00:28:31.834 --rc genhtml_function_coverage=1 00:28:31.834 --rc genhtml_legend=1 00:28:31.834 --rc geninfo_all_blocks=1 00:28:31.834 --rc geninfo_unexecuted_blocks=1 00:28:31.834 00:28:31.834 ' 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:31.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.834 --rc genhtml_branch_coverage=1 00:28:31.834 --rc genhtml_function_coverage=1 00:28:31.834 --rc genhtml_legend=1 00:28:31.834 --rc geninfo_all_blocks=1 00:28:31.834 --rc geninfo_unexecuted_blocks=1 00:28:31.834 00:28:31.834 ' 00:28:31.834 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:31.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.834 --rc genhtml_branch_coverage=1 00:28:31.834 --rc genhtml_function_coverage=1 00:28:31.834 --rc genhtml_legend=1 00:28:31.834 --rc geninfo_all_blocks=1 00:28:31.834 --rc geninfo_unexecuted_blocks=1 00:28:31.834 00:28:31.835 ' 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:31.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.835 07:37:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:37.109 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.109 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:37.109 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:37.109 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:37.109 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:37.109 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:37.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:37.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:37.110 Found net devices under 0000:86:00.0: cvl_0_0 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:37.110 Found net devices under 0000:86:00.1: cvl_0_1 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
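The trace above shows gather_supported_nvmf_pci_devs resolving each supported PCI function to its kernel net device by globbing sysfs; in this run the two E810 functions 0000:86:00.0 and 0000:86:00.1 map to cvl_0_0 and cvl_0_1, both in the "up" state. A minimal standalone sketch of that lookup follows (illustrative only, not part of nvmf/common.sh; the PCI addresses are the ones seen in this log):

# Hypothetical helper mirroring the sysfs lookup done by gather_supported_nvmf_pci_devs.
for pci in 0000:86:00.0 0000:86:00.1; do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue                  # function has no bound network driver
    dev=${netdir##*/}                             # e.g. cvl_0_0 / cvl_0_1 in this run
    state=$(cat "$netdir/operstate" 2>/dev/null)  # the script only keeps devices that are "up"
    echo "Found net device under $pci: $dev ($state)"
  done
done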
00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.110 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.111 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.111 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:37.111 07:38:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:37.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:28:37.111 00:28:37.111 --- 10.0.0.2 ping statistics --- 00:28:37.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.111 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:37.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:28:37.111 00:28:37.111 --- 10.0.0.1 ping statistics --- 00:28:37.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.111 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:37.111 ************************************ 00:28:37.111 START TEST nvmf_target_disconnect_tc1 00:28:37.111 ************************************ 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:37.111 07:38:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:37.111 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.370 [2024-11-26 07:38:05.237021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.370 [2024-11-26 07:38:05.237069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1134ab0 with addr=10.0.0.2, port=4420 00:28:37.370 [2024-11-26 07:38:05.237089] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:37.370 [2024-11-26 07:38:05.237113] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:37.370 [2024-11-26 07:38:05.237123] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:37.370 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:37.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:37.370 Initializing NVMe Controllers 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.370 00:28:37.370 real 0m0.109s 00:28:37.370 user 0m0.045s 00:28:37.370 sys 0m0.063s 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.370 ************************************ 00:28:37.370 END TEST nvmf_target_disconnect_tc1 00:28:37.370 ************************************ 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:37.370 ************************************ 00:28:37.370 START TEST nvmf_target_disconnect_tc2 00:28:37.370 ************************************ 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=895701 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 895701 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 895701 ']' 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.370 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.370 [2024-11-26 07:38:05.374328] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:28:37.370 [2024-11-26 07:38:05.374369] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.370 [2024-11-26 07:38:05.452976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.630 [2024-11-26 07:38:05.495855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.630 [2024-11-26 07:38:05.495891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
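The target bring-up traced here amounts to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waiting for its RPC socket before any configuration RPCs are issued. A rough manual equivalent is sketched below (the path, namespace name and the -i 0 -e 0xFFFF -m 0xF0 arguments are taken from this run; the polling loop is only an illustration, not the actual waitforlisten helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target in the namespace that owns cvl_0_0: shm id 0, all trace groups, cores 4-7 (0xF0).
ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
tgt_pid=$!
# Block until the RPC socket answers, so later rpc_cmd calls do not race the startup.
until "$SPDK"/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
  kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
  sleep 0.5
done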
00:28:37.630 [2024-11-26 07:38:05.495898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.630 [2024-11-26 07:38:05.495904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.630 [2024-11-26 07:38:05.495909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.630 [2024-11-26 07:38:05.497450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:37.630 [2024-11-26 07:38:05.497562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:37.630 [2024-11-26 07:38:05.497656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:37.630 [2024-11-26 07:38:05.497658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.630 Malloc0 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.630 [2024-11-26 07:38:05.661184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.630 07:38:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.630 [2024-11-26 07:38:05.693424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=895804 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:37.630 07:38:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:40.197 07:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 895701 00:28:40.197 07:38:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error 
(sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Write completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Write completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Write completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Write completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.197 Read completed with error (sct=0, sc=8) 00:28:40.197 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 [2024-11-26 07:38:07.721909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read 
completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 [2024-11-26 07:38:07.722103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O 
failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 [2024-11-26 07:38:07.722304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Write completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 
00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 Read completed with error (sct=0, sc=8) 00:28:40.198 starting I/O failed 00:28:40.198 [2024-11-26 07:38:07.722508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:40.198 [2024-11-26 07:38:07.722722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.198 [2024-11-26 07:38:07.722780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.723015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.723053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.723331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.723364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.723611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.723643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.723846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.723879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.723997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.724012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.724107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.724122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.724258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.724274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 
00:28:40.199 [2024-11-26 07:38:07.724381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.724397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.724547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.724563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.724764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.724796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.724906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.724938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.725071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.725103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.725307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.725339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.725530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.725562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.725768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.725783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.725861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.725897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 00:28:40.199 [2024-11-26 07:38:07.726209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.199 [2024-11-26 07:38:07.726251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.199 qpair failed and we were unable to recover it. 
00:28:40.199 [2024-11-26 07:38:07.726436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.199 [2024-11-26 07:38:07.726468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420
00:28:40.199 qpair failed and we were unable to recover it.
[00:28:40.199-00:28:40.205: the same three-record sequence repeats continuously for timestamps 2024-11-26 07:38:07.726660 through 07:38:07.762827 -- posix.c:1054:posix_sock_create "connect() failed, errno = 111", followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error" for tqpair values 0x7f76c0000b90, 0x7f76c4000b90, 0x7f76cc000b90, and 0x1f24ba0, all targeting addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it."]
00:28:40.205 [2024-11-26 07:38:07.763004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.763016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.763118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.763150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.763345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.763377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.763630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.763664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.763830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.763842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.763915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.763925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.764010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.764021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.764216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.764228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.764388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.764400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.764572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.764603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 
00:28:40.205 [2024-11-26 07:38:07.764868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.764900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.765104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.765138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.765317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.765348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.765558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.765591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.765702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.765735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.765909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.765921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.766006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.766040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.766176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.766208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.766468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.766500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.766817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.766849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 
00:28:40.205 [2024-11-26 07:38:07.767028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.767061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.767186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.767198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.767364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.767376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.767606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.767637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.767759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.767791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.768052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.768064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.768232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.768244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.768391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.768404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.205 qpair failed and we were unable to recover it. 00:28:40.205 [2024-11-26 07:38:07.768533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.205 [2024-11-26 07:38:07.768545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.768684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.768696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 
00:28:40.206 [2024-11-26 07:38:07.768844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.768856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.768966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.768978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.769118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.769151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.769322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.769361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.769564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.769596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.769776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.769788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.769919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.769931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.770013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.770024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.770184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.770227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.770475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.770506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 
00:28:40.206 [2024-11-26 07:38:07.770694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.770726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.770891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.770902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.771028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.771040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.771104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.771115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.771197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.771208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.771354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.771386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.771570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.771603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.771809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.771840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.772011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.772024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.772120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.772131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 
00:28:40.206 [2024-11-26 07:38:07.772380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.772412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.772599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.772632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.772809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.772841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.773020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.773032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.773112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.773122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.773236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.773268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.773438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.773476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.773588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.773620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.773804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.773837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.774013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.774047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 
00:28:40.206 [2024-11-26 07:38:07.774173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.774205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.774487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.774520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.774726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.774758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.774962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.774997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.775125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.775157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.775356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.775367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.206 qpair failed and we were unable to recover it. 00:28:40.206 [2024-11-26 07:38:07.775438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.206 [2024-11-26 07:38:07.775448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.775641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.775653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.775895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.775906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.775999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.776022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 
00:28:40.207 [2024-11-26 07:38:07.776100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.776111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.776323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.776356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.776466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.776498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.776759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.776791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.776943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.776964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.777120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.777154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.777427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.777460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.777643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.777675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.777806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.777839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.777959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.777994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 
00:28:40.207 [2024-11-26 07:38:07.778115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.778147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.778265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.778277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.778406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.778418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.778562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.778574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.778708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.778741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.778920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.778961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.779081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.779114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.779354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.779386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.779522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.779553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.779731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.779763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 
00:28:40.207 [2024-11-26 07:38:07.779946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.779990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.780256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.780289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.780530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.780562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.780779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.780812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.780958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.780970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.781099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.781111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.781184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.781196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.781395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.781426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.781705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.781737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 00:28:40.207 [2024-11-26 07:38:07.781924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.207 [2024-11-26 07:38:07.781936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.207 qpair failed and we were unable to recover it. 
00:28:40.208 [2024-11-26 07:38:07.782020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.782031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.782254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.782267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.782431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.782442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.782603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.782636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.782741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.782774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.782941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.782958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.783103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.783115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.783255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.783267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.783409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.783421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.783582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.783614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 
00:28:40.208 [2024-11-26 07:38:07.783731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.783764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.783958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.783993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.784188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.784200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.784274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.784285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.784359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.784390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.784571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.784604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.784782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.784814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.785084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.785119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.785235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.785267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.785401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.785433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 
00:28:40.208 [2024-11-26 07:38:07.785717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.785748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.785868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.785900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.786037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.786048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.786121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.786131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.786354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.786387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.786574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.786606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.786722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.786754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.786903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.786915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.786999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.787010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 00:28:40.208 [2024-11-26 07:38:07.787136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.208 [2024-11-26 07:38:07.787148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.208 qpair failed and we were unable to recover it. 
00:28:40.208 [2024-11-26 07:38:07.787215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.208 [2024-11-26 07:38:07.787226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:40.208 qpair failed and we were unable to recover it.
[... the same three-line pattern (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it.) repeats ~32 more times for tqpair=0x7f76c4000b90 through 07:38:07.793 ...]
00:28:40.209 [2024-11-26 07:38:07.793555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.209 [2024-11-26 07:38:07.793627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420
00:28:40.209 qpair failed and we were unable to recover it.
[... repeats ~39 more times for tqpair=0x7f76c0000b90 through 07:38:07.801 ...]
00:28:40.210 [2024-11-26 07:38:07.801972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.210 [2024-11-26 07:38:07.802045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.210 qpair failed and we were unable to recover it.
[... repeats ~94 more times for tqpair=0x1f24ba0 through 07:38:07.822 ...]
00:28:40.213 [2024-11-26 07:38:07.822636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.213 [2024-11-26 07:38:07.822665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:40.213 qpair failed and we were unable to recover it.
[... repeats ~39 more times for tqpair=0x7f76c4000b90 through 07:38:07.831 ...]
00:28:40.214 [2024-11-26 07:38:07.831284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.214 [2024-11-26 07:38:07.831323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.214 qpair failed and we were unable to recover it.
[... repeats once more for tqpair=0x1f24ba0 at 07:38:07.831587 ...]
[the same three-line failure sequence repeats for tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 through 2024-11-26 07:38:07.866269]
00:28:40.218 [2024-11-26 07:38:07.866269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.218 [2024-11-26 07:38:07.866284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.218 qpair failed and we were unable to recover it.
00:28:40.218 [2024-11-26 07:38:07.866382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.218 [2024-11-26 07:38:07.866397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.218 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.866569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.866585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.866659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.866674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.866748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.866763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.866921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.866968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.867088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.867120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.867227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.867259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.867451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.867484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.867666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.867699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.867879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.867912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 
00:28:40.219 [2024-11-26 07:38:07.868057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.868090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.868323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.868362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.868446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.868461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.868623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.868638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.868790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.868806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.868960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.868993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.869104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.869137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.869396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.869429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.869563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.869596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.869702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.869735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 
00:28:40.219 [2024-11-26 07:38:07.869922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.869965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.870153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.870169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.870314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.870347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.870536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.870568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.870769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.870802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.871040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.871075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.871183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.871216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.871419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.871452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.871656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.871689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.871889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.871922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 
00:28:40.219 [2024-11-26 07:38:07.872195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.872212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.872388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.872404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.872562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.872596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.872729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.872761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.872931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.872973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.873152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.873169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.873251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.873266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.873332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.873347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.873497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.219 [2024-11-26 07:38:07.873513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.219 qpair failed and we were unable to recover it. 00:28:40.219 [2024-11-26 07:38:07.873649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.873665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 
00:28:40.220 [2024-11-26 07:38:07.873755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.873769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.873901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.873935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.874063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.874097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.874276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.874308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.874478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.874495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.874615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.874687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.874878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.874965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.875178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f32af0 is same with the state(6) to be set 00:28:40.220 [2024-11-26 07:38:07.875338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.875368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.875573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.875597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 
00:28:40.220 [2024-11-26 07:38:07.875775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.875810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.875962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.875997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.876127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.876159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.876293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.876334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.876401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.876411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.876496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.876518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.876728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.876740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.876870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.876904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.877096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.877131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.877291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.877324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 
00:28:40.220 [2024-11-26 07:38:07.877453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.877485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.877663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.877696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.877970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.878006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.878273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.878306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.878481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.878513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.878791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.878824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.879010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.879045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.879232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.879244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.879400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.879433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.879679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.879711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 
00:28:40.220 [2024-11-26 07:38:07.879891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.879923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.880038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.880071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.880212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.880251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.880464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.220 [2024-11-26 07:38:07.880497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.220 qpair failed and we were unable to recover it. 00:28:40.220 [2024-11-26 07:38:07.880697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.880729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.880904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.880936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.881072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.881105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.881308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.881345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.881491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.881503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.881583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.881594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 
00:28:40.221 [2024-11-26 07:38:07.881664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.881674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.881912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.881945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.882082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.882115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.882296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.882329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.882507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.882539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.882708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.882740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.882922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.882966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.883152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.883184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.883425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.883457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.883666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.883699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 
00:28:40.221 [2024-11-26 07:38:07.883883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.883916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.884067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.884104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.884286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.884320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.884445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.884461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.884539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.884553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.884713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.884745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.884852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.884885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.885084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.885118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.885242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.885279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.885446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.885483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 
00:28:40.221 [2024-11-26 07:38:07.885626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.885639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.885786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.885798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.886033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.886046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.886237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.886270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.886379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.886412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.886608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.886640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.886884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.886918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.887190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.887222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.887393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.887426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.887565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.887599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 
00:28:40.221 [2024-11-26 07:38:07.887882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.887915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.888194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.221 [2024-11-26 07:38:07.888236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.221 qpair failed and we were unable to recover it. 00:28:40.221 [2024-11-26 07:38:07.888321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.888338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.888423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.888437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.888581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.888614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.888834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.888865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.889008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.889046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.889194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.889209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.889433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.889464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.889589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.889621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 
00:28:40.222 [2024-11-26 07:38:07.889750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.889782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.889969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.890002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.890267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.890299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.890405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.890436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.890623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.890655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.890766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.890799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.891132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.891151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.891244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.891260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.891432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.891450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.891678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.891719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 
00:28:40.222 [2024-11-26 07:38:07.891920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.891975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.892114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.892147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.892270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.892312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.892448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.892464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.892653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.892686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.892934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.892980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.893120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.893152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.893379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.893395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.893541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.893557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.893735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.893768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 
00:28:40.222 [2024-11-26 07:38:07.893884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.893918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.894039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.894092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.894242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.894276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.894458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.894471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.894627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.894661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.894852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.894885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.895149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.895184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.895362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.895374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.895448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.895459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 00:28:40.222 [2024-11-26 07:38:07.895545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.222 [2024-11-26 07:38:07.895556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.222 qpair failed and we were unable to recover it. 
00:28:40.222 [2024-11-26 07:38:07.895783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.895816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.896008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.896043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.896182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.896215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.896387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.896399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.896478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.896489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.896663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.896697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.896906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.896938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.897081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.897093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.897174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.897185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.897328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.897361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 
00:28:40.223 [2024-11-26 07:38:07.897553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.897585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.897764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.897797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.898051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.898085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.898204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.898237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.898418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.898451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.898703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.898715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.898772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.898785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.898997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.899030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.899134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.899166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.899364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.899397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 
00:28:40.223 [2024-11-26 07:38:07.899532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.899564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.899807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.899840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.900047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.900059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.900197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.900209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.900359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.900392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.900605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.900637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.900757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.900791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.900909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.900941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.901084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.901117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.901237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.901270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 
00:28:40.223 [2024-11-26 07:38:07.901390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.901423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.901639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.901672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.901869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.901902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.902053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.902065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.902143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.902153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.902248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.902279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.902476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.902509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.902703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.223 [2024-11-26 07:38:07.902735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.223 qpair failed and we were unable to recover it. 00:28:40.223 [2024-11-26 07:38:07.902837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.902870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.902994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.903029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 
00:28:40.224 [2024-11-26 07:38:07.903291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.903324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.903434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.903468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.903654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.903686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.903812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.903844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.904027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.904060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.904249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.904282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.904441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.904453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.904584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.904618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.904796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.904829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.905029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.905064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 
00:28:40.224 [2024-11-26 07:38:07.905246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.905258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.905429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.905461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.905636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.905669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.905866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.905899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.906098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.906133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.906265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.906298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.906487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.906524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.906648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.906681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.906863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.906895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.907084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.907096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 
00:28:40.224 [2024-11-26 07:38:07.907241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.907253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.907430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.907463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.907646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.907678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.907808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.907841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.908059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.908094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.908354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.908366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.908507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.908540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.908718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.908750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.908939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.908982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.909099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.909111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 
00:28:40.224 [2024-11-26 07:38:07.909332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.909365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.909549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.909583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.909755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.909788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.910031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.910064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.910170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.224 [2024-11-26 07:38:07.910182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.224 qpair failed and we were unable to recover it. 00:28:40.224 [2024-11-26 07:38:07.910408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.910440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.910626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.910659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.910765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.910798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.910935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.910978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.911160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.911193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 
00:28:40.225 [2024-11-26 07:38:07.911328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.911361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.911535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.911548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.911618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.911629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.911710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.911721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.911859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.911870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.912008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.912021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.912099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.912110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.912271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.912283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.912377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.912410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.912586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.912619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 
00:28:40.225 [2024-11-26 07:38:07.912735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.912768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.912908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.912940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.913061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.913094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.913269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.913301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.913494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.913526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.913638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.913671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.913848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.913887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.914148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.914182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.914307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.914339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.914597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.914630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 
00:28:40.225 [2024-11-26 07:38:07.914867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.914899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.915121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.915156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.915346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.915378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.915613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.915625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.915774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.915807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.916045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.916080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.916215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.916248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.916541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.916580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.225 [2024-11-26 07:38:07.916773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.225 [2024-11-26 07:38:07.916805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.225 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.916915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.916946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 
00:28:40.226 [2024-11-26 07:38:07.917168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.917200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.917382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.917413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.917532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.917565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.917749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.917782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.917981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.918014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.918131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.918163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.918279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.918290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.918366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.918377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.918511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.918545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.918783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.918816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 
00:28:40.226 [2024-11-26 07:38:07.919008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.919041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.919153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.919165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.919368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.919402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.919581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.919615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.919836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.919869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.919981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.920014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.920233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.920267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.920395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.920407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.920630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.920663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.920786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.920818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 
00:28:40.226 [2024-11-26 07:38:07.921007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.921042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.921223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.921257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.921431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.921463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.921596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.921628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.921800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.921834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.922039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.922072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.922259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.922273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.922423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.922455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.922574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.922607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.922778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.922812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 
00:28:40.226 [2024-11-26 07:38:07.923003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.923037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.923278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.923311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.923494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.923526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.923642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.923674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.923847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.923880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.923993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.924027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.226 [2024-11-26 07:38:07.924295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.226 [2024-11-26 07:38:07.924329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.226 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.924525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.924556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.924672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.924705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.924967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.925002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 
00:28:40.227 [2024-11-26 07:38:07.925273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.925306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.925558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.925589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.925900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.925933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.926134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.926166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.926284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.926317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.926507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.926518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.926685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.926696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.926921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.926961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.927157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.927189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.927384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.927415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 
00:28:40.227 [2024-11-26 07:38:07.927545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.927557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.927697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.927709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.927982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.928015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.928172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.928184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.928353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.928385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.928517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.928550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.928740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.928772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.928958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.928991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.929180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.929213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.929353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.929385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 
00:28:40.227 [2024-11-26 07:38:07.929589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.929622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.929736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.929768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.929966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.929999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.930172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.930204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.930401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.930433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.930566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.930599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.930777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.930815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.931003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.931036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.931220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.931253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.931454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.931485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 
00:28:40.227 [2024-11-26 07:38:07.931602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.931635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.931811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.931844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.932038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.932072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.932257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.932289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.227 qpair failed and we were unable to recover it. 00:28:40.227 [2024-11-26 07:38:07.932468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.227 [2024-11-26 07:38:07.932499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.932739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.932772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.932956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.932990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.933105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.933138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.933270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.933302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.933512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.933524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 
00:28:40.228 [2024-11-26 07:38:07.933764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.933775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.933911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.933923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.934051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.934063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.934190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.934203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.934356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.934389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.934572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.934605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.934729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.934762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.934868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.934901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.935098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.935131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.935323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.935355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 
00:28:40.228 [2024-11-26 07:38:07.935478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.935510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.935690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.935722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.935934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.935976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.936206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.936277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.936489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.936527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.936703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.936738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.936861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.936894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.937092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.937127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.937370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.937404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.937577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.937610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 
00:28:40.228 [2024-11-26 07:38:07.937795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.937828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.938095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.938140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.938236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.938252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.938343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.938359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.938521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.938537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.938636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.938668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.938777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.938819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.939087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.939121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.939297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.939313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.939474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.939508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 
00:28:40.228 [2024-11-26 07:38:07.939628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.939661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.939854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.228 [2024-11-26 07:38:07.939888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.228 qpair failed and we were unable to recover it. 00:28:40.228 [2024-11-26 07:38:07.940099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.940135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.940330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.940364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.940476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.940508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.940678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.940694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.940863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.940880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.941030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.941066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.941187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.941221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.941394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.941426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 
00:28:40.229 [2024-11-26 07:38:07.941548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.941560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.941727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.941738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.941868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.941879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.942023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.942058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.942181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.942215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.942345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.942377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.942550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.942583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.942717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.942749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.942905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.942939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.943179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.943192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 
00:28:40.229 [2024-11-26 07:38:07.943394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.943427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.943692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.943725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.943850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.943883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.944099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.944133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.944394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.944427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.944609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.944641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.944825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.944858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.945052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.945086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.945216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.945248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.945382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.945414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 
00:28:40.229 [2024-11-26 07:38:07.945658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.945670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.945824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.945836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.945945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.945961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.946087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.946098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.946232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.946244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.946389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.946422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.946668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.229 [2024-11-26 07:38:07.946707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.229 qpair failed and we were unable to recover it. 00:28:40.229 [2024-11-26 07:38:07.946825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.946857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.947031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.947066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.947192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.947224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 
00:28:40.230 [2024-11-26 07:38:07.947348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.947381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.947490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.947501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.947626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.947638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.947726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.947757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.947936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.947988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.948170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.948202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.948391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.948424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.948611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.948643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.948747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.948792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.948941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.948957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 
00:28:40.230 [2024-11-26 07:38:07.949129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.949161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.949290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.949323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.949425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.949458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.949642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.949674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.949852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.949886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.950057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.950092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.950202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.950235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.950421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.950454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.950691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.950723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.950900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.950932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 
00:28:40.230 [2024-11-26 07:38:07.951136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.951169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.951353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.951387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.951583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.951615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.951801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.951875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.952121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.952158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.952284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.952300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.952507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.952540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.952778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.952810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.953055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.953091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.953265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.953281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 
00:28:40.230 [2024-11-26 07:38:07.953423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.953456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.953698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.953731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.953919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.953962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.954100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.230 [2024-11-26 07:38:07.954132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.230 qpair failed and we were unable to recover it. 00:28:40.230 [2024-11-26 07:38:07.954261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.954295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.954485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.954518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.954703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.954749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.954923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.954963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.955102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.955135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.955337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.955370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 
00:28:40.231 [2024-11-26 07:38:07.955556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.955589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.955771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.955803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.955996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.956030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.956143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.956158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.956234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.956249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.956357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.956389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.956594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.956627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.956803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.956836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.957053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.957088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.957298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.957331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 
00:28:40.231 [2024-11-26 07:38:07.957589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.957622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.957864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.957898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.958121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.958155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.958328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.958361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.958544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.958578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.958733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.958748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.958826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.958840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.959000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.959017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.959130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.959163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.959337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.959370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 
00:28:40.231 [2024-11-26 07:38:07.959556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.959590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.959733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.959765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.959882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.959915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.960188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.960261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.960461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.960499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.960734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.960750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.960892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.960909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.961056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.961091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.961205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.961239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.961465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.961482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 
00:28:40.231 [2024-11-26 07:38:07.961718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.961734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.231 [2024-11-26 07:38:07.961833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.231 [2024-11-26 07:38:07.961848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.231 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.961935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.961955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.962106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.962122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.962279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.962313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.962504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.962537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.962731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.962775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.963022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.963056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.963186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.963219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.963410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.963442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 
00:28:40.232 [2024-11-26 07:38:07.963633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.963667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.963854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.963887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.964020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.964055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.964172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.964205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.964458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.964474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.964577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.964594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.964662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.964677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.964861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.964895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.965033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.965067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.965198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.965230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 
00:28:40.232 [2024-11-26 07:38:07.965438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.965454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.965526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.965541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.965730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.965762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.965887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.965920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.966181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.966254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.966524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.966561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.966700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.966717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.966808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.966823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.966985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.967020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.967129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.967170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 
00:28:40.232 [2024-11-26 07:38:07.967244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.967259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.967411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.967428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.967627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.967643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.967848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.967869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.967968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.967985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.968198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.968232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.968496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.968529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.968670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.968703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.968828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.968861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 00:28:40.232 [2024-11-26 07:38:07.968979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.232 [2024-11-26 07:38:07.969013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.232 qpair failed and we were unable to recover it. 
00:28:40.233 [2024-11-26 07:38:07.969133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.969167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.969357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.969391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.969632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.969666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.969802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.969837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.970033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.970068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.970317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.970351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.970461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.970477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.970662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.970695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.970817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.970850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.971832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.971863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 
00:28:40.233 [2024-11-26 07:38:07.972028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.972048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.972218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.972233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.972403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.972418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.972516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.972530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.972671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.972686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.972843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.972860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.972956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.972972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.973119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.973135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.973236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.973252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.973328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.973344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 
00:28:40.233 [2024-11-26 07:38:07.973482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.973502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.973644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.973661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.973729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.973744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.973887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.973903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.973984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.974000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.974140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.974157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.974304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.974321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.974407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.974422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.974508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.974522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.974771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.974787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 
00:28:40.233 [2024-11-26 07:38:07.974876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.974891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.975033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.975050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.975154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.975169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.975255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.975271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.975360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.975375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.975518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.975534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.975604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.233 [2024-11-26 07:38:07.975619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.233 qpair failed and we were unable to recover it. 00:28:40.233 [2024-11-26 07:38:07.975696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.975712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.975864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.975881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.976031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.976048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 
00:28:40.234 [2024-11-26 07:38:07.976116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.976132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.976314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.976331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.976408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.976423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.976571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.976588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.976686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.976702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.976803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.976817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.976896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.976911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.977007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.977023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.977183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.977200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.977346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.977362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 
00:28:40.234 [2024-11-26 07:38:07.977435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.977450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.977597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.977614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.977691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.977707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.977854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.977870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.977959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.977975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.978128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.978144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.978301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.978319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.978473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.978490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.978624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.978641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.978725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.978740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 
00:28:40.234 [2024-11-26 07:38:07.978841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.978856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.978976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.979101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.979223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.979379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.979488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.979586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.979689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.979850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.979960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.979976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 
00:28:40.234 [2024-11-26 07:38:07.980113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.980128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.980202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.234 [2024-11-26 07:38:07.980218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.234 qpair failed and we were unable to recover it. 00:28:40.234 [2024-11-26 07:38:07.980298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.980313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.980461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.980478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.980567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.980615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.980805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.980837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.981020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.981056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.981242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.981279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.981470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.981486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.981577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.981592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 
00:28:40.235 [2024-11-26 07:38:07.981740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.981774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.981907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.981941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.982201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.982238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.982369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.982402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.982527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.982544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.982694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.982710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.982902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.982935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.983075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.983108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.983375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.983410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.983520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.983554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 
00:28:40.235 [2024-11-26 07:38:07.983681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.983714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.983909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.983941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.984169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.984204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.984337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.984370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.984496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.984530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.984779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.984814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.985002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.985035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.985216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.985249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.985357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.985373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.985583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.985617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 
00:28:40.235 [2024-11-26 07:38:07.985733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.985766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.985967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.986009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.986221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.986255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.986382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.986415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.986550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.986582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.986767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.986783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.986877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.986894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.987029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.987046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.987187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.987220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.987394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.987426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 
00:28:40.235 [2024-11-26 07:38:07.987562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.987595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.987726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.987760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.987879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.987911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.235 qpair failed and we were unable to recover it. 00:28:40.235 [2024-11-26 07:38:07.988062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.235 [2024-11-26 07:38:07.988096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.236 qpair failed and we were unable to recover it. 00:28:40.236 [2024-11-26 07:38:07.988314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.236 [2024-11-26 07:38:07.988357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.236 qpair failed and we were unable to recover it. 00:28:40.236 [2024-11-26 07:38:07.988464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.236 [2024-11-26 07:38:07.988497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.236 qpair failed and we were unable to recover it. 00:28:40.236 [2024-11-26 07:38:07.988684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.236 [2024-11-26 07:38:07.988717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.236 qpair failed and we were unable to recover it. 00:28:40.236 [2024-11-26 07:38:07.988837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.236 [2024-11-26 07:38:07.988869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.236 qpair failed and we were unable to recover it. 00:28:40.236 [2024-11-26 07:38:07.988982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.236 [2024-11-26 07:38:07.989016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.236 qpair failed and we were unable to recover it. 00:28:40.236 [2024-11-26 07:38:07.989197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.236 [2024-11-26 07:38:07.989230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.236 qpair failed and we were unable to recover it. 
00:28:40.236 [2024-11-26 07:38:07.989409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.236 [2024-11-26 07:38:07.989442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420
00:28:40.236 qpair failed and we were unable to recover it.
00:28:40.236 [... the same three-line failure pattern repeats continuously here: posix.c:1054:posix_sock_create reports "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error, and the qpair is declared failed and unrecoverable. The failures target addr=10.0.0.2, port=4420 throughout, first for tqpair=0x7f76c0000b90 and, from 07:38:07.992 onward, for tqpair=0x7f76cc000b90 ...]
00:28:40.241 [2024-11-26 07:38:08.027389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.241 [2024-11-26 07:38:08.027421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420
00:28:40.241 qpair failed and we were unable to recover it.
00:28:40.241 [2024-11-26 07:38:08.027600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.027615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.027786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.027820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.027998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.028032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.028206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.028238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.028406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.028423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.028584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.028601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.028691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.028705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.028868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.028884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.029063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.029098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.029288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.029322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 
00:28:40.241 [2024-11-26 07:38:08.029431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.029465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.029606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.029624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.029702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.029717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.029965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.030000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.030185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.030219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.030355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.030387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.030565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.030598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.030868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.030899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.031104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.031138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.031273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.031305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 
00:28:40.241 [2024-11-26 07:38:08.031416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.031448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.031579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.031611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.031803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.031836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.031995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.032029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.032244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.032278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.032470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.032486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.032635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.032651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.032740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.032754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.032910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.032926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.033012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.033027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 
00:28:40.241 [2024-11-26 07:38:08.033236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.033253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.033484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.033500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.033581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.033595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.033689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.241 [2024-11-26 07:38:08.033705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.241 qpair failed and we were unable to recover it. 00:28:40.241 [2024-11-26 07:38:08.033794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.033809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.033958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.034002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.034125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.034158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.034333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.034368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.034490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.034522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.034614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.034630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 
00:28:40.242 [2024-11-26 07:38:08.034717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.034732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.034802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.034817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.034907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.034923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.035103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.035137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.035334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.035365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.035489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.035521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.035651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.035685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.035795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.035811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.035959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.035976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.036114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.036129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 
00:28:40.242 [2024-11-26 07:38:08.036210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.036226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.036406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.036426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.036607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.036641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.036818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.036849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.037048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.037083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.037288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.037305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.037405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.037421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.037509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.037524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.037670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.037685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.037819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.037835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 
00:28:40.242 [2024-11-26 07:38:08.037921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.037936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.038035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.038049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.038255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.038272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.038422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.038438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.038521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.038536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.038675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.038692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.038829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.038845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.039022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.039056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.039175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.039207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.039327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.039358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 
00:28:40.242 [2024-11-26 07:38:08.039493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.039527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.039634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.039666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.039784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.039800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.039978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.039995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.040074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.040090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.242 [2024-11-26 07:38:08.040169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.242 [2024-11-26 07:38:08.040184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.242 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.040254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.040269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.040449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.040465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.040617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.040650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.040753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.040784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 
00:28:40.243 [2024-11-26 07:38:08.040983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.041016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.041126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.041158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.041284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.041316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.041439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.041472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.041642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.041677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.041844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.041860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.042013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.042046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.042313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.042345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.042485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.042517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.042711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.042745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 
00:28:40.243 [2024-11-26 07:38:08.042856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.042871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.043086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.043105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.043211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.043227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.043386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.043421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.043631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.043664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.043933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.044000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.044243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.044277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.044416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.044449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.044637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.044671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.044796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.044828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 
00:28:40.243 [2024-11-26 07:38:08.045021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.045058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.045242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.045275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.045477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.045493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.045574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.045608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.045821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.045855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.046060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.046093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.046229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.046261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.243 [2024-11-26 07:38:08.046458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.243 [2024-11-26 07:38:08.046491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.243 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.046697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.046741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.046892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.046908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 
00:28:40.244 [2024-11-26 07:38:08.046989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.047004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.047143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.047159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.047295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.047311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.047384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.047399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.047621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.047655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.047829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.047861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.048145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.048181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.048301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.048334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.048491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.048563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.048783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.048825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 
00:28:40.244 [2024-11-26 07:38:08.049084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.049104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.049258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.049275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.049351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.049365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.049534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.049551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.049709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.049743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.049924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.049969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.050151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.050184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.050402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.050437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.050555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.050588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 00:28:40.244 [2024-11-26 07:38:08.050788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.244 [2024-11-26 07:38:08.050821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.244 qpair failed and we were unable to recover it. 
00:28:40.244 [2024-11-26 07:38:08.051030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.244 [2024-11-26 07:38:08.051065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.244 qpair failed and we were unable to recover it.
00:28:40.244 [2024-11-26 07:38:08.054238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.244 [2024-11-26 07:38:08.054271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.244 qpair failed and we were unable to recover it.
00:28:40.245 [2024-11-26 07:38:08.054402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.245 [2024-11-26 07:38:08.054440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420
00:28:40.245 qpair failed and we were unable to recover it.
00:28:40.246 [2024-11-26 07:38:08.066915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.246 [2024-11-26 07:38:08.066930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420
00:28:40.246 qpair failed and we were unable to recover it.
00:28:40.246 [2024-11-26 07:38:08.067027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.247 [2024-11-26 07:38:08.067049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.247 qpair failed and we were unable to recover it.
00:28:40.250 [2024-11-26 07:38:08.088094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.250 [2024-11-26 07:38:08.088129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.250 qpair failed and we were unable to recover it.
00:28:40.250 [2024-11-26 07:38:08.088316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.088348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.088455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.088488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.088605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.088637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.088824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.088859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.089040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.089057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.089201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.089235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.089409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.089441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.089700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.089738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.089937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.089978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.090099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.090132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 
00:28:40.250 [2024-11-26 07:38:08.090261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.090294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.090414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.090447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.090724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.090756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.090868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.090884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.090994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.091010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.091105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.091121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.091268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.091313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.091501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.091535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.091779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.091811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.091986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.092003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 
00:28:40.250 [2024-11-26 07:38:08.092072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.092087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.092180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.092195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.092280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.092296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.092467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.092484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.092555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.092598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.092732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.092763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.250 qpair failed and we were unable to recover it. 00:28:40.250 [2024-11-26 07:38:08.092874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.250 [2024-11-26 07:38:08.092906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.093023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.093057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.093254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.093285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.093454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.093486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 
00:28:40.251 [2024-11-26 07:38:08.093667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.093701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.093868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.093885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.094036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.094052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.094203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.094221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.094372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.094391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.094491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.094507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.094675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.094690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.094844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.094877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.095120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.095155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.095280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.095313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 
00:28:40.251 [2024-11-26 07:38:08.095440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.095473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.095647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.095680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.095794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.095826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.095999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.096034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.096255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.096286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.096478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.096511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.096620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.096651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.096837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.096853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.097042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.097079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.097209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.097243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 
00:28:40.251 [2024-11-26 07:38:08.097432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.097466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.097675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.097709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.097888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.097920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.098148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.098181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.098366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.098397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.098579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.098615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.098851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.098884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.099070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.099105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.099297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.099330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.099506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.099540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 
00:28:40.251 [2024-11-26 07:38:08.099676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.099708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.099877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.099911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.100101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.100136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.100261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.100297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.251 qpair failed and we were unable to recover it. 00:28:40.251 [2024-11-26 07:38:08.100511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.251 [2024-11-26 07:38:08.100545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.100732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.100765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.100885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.100916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.101127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.101161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.101296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.101329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.101530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.101562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 
00:28:40.252 [2024-11-26 07:38:08.101686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.101719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.101902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.101934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.102078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.102133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.102327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.102361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.102484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.102517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.102663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.102702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.102830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.102861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.103067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.103084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.103158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.103174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.103258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.103272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 
00:28:40.252 [2024-11-26 07:38:08.103428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.103444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.103548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.103564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.103642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.103657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.103811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.103826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.103914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.103970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.104147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.104179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.104304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.104336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.104520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.104554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.104681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.104698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.104878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.104912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 
00:28:40.252 [2024-11-26 07:38:08.105111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.105145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.105329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.105361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.105479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.105513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.105806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.105839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.105972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.106008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.106140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.106173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.106426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.106459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.106593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.106636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.106724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.252 [2024-11-26 07:38:08.106740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.252 qpair failed and we were unable to recover it. 00:28:40.252 [2024-11-26 07:38:08.107003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.107020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 
00:28:40.253 [2024-11-26 07:38:08.107098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.107113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.107215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.107230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.107315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.107336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.107435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.107451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.107667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.107701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.107990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.108025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.108166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.108201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.108329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.108363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.108553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.108586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.108805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.108840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 
00:28:40.253 [2024-11-26 07:38:08.109020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.109055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.109190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.109227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.109366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.109399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.109580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.109613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.109793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.109809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.110021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.110057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.110254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.110289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.110416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.110453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.110587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.110621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.110819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.110854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 
00:28:40.253 [2024-11-26 07:38:08.110978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.111015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.111188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.111222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.111403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.111436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.111624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.111641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.111879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.111896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.112067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.112084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.112171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.112186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.112308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.112340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.112531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.112565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.112750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.112792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 
00:28:40.253 [2024-11-26 07:38:08.112925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.112940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.113103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.113137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.113323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.113355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.113549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.113583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.253 qpair failed and we were unable to recover it. 00:28:40.253 [2024-11-26 07:38:08.113767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.253 [2024-11-26 07:38:08.113802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.113976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.114010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.114254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.114286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.114417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.114449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.114643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.114677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.114779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.114795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 
00:28:40.254 [2024-11-26 07:38:08.114964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.114987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.115173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.115191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.115332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.115348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.115536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.115606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.115823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.115859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.116061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.116097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.116282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.116317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.116452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.116492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.116771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.116805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.117011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.117023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 
00:28:40.254 [2024-11-26 07:38:08.117205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.117217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.117321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.117354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.117561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.117596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.117785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.117819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.118005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.118041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.118230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.118264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.118463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.118506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.118645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.118657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.118825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.118838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.119045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.119079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 
00:28:40.254 [2024-11-26 07:38:08.119203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.119237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.119442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.119475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.119747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.119780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.119910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.119942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.120131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.120143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.120279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.120290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.120373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.120385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.120460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.120470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.120560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.120572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.120702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.120714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 
00:28:40.254 [2024-11-26 07:38:08.120850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.120862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.121004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.254 [2024-11-26 07:38:08.121016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.254 qpair failed and we were unable to recover it. 00:28:40.254 [2024-11-26 07:38:08.121092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.121104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.121299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.121311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.121478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.121490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.121596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.121630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.121753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.121790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.121987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.122021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.122188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.122260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.122496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.122533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 
00:28:40.255 [2024-11-26 07:38:08.122658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.122691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.122791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.122807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.122904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.122920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.123078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.123116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.123291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.123336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.123521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.123554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.123689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.123723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.123838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.123850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.124012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.124025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.124156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.124168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 
00:28:40.255 [2024-11-26 07:38:08.124233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.124243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.124378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.124389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.124527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.124563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.124691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.124726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.124848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.124880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.125014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.125050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.125246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.125286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.125531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.125565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.125699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.125732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.125848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.125881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 
00:28:40.255 [2024-11-26 07:38:08.126113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.126147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.126323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.126357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.126487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.126519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.126652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.126686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.126895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.126928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.127116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.127150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.127365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.127398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.127587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.127621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.127860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.127893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.255 qpair failed and we were unable to recover it. 00:28:40.255 [2024-11-26 07:38:08.127991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.255 [2024-11-26 07:38:08.128003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 
00:28:40.256 [2024-11-26 07:38:08.128235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.128269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.128473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.128506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.128635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.128669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.128776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.128789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.128848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.128859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.129014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.129027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.129171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.129183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.129265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.129276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.129554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.129586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.129716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.129749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 
00:28:40.256 [2024-11-26 07:38:08.129943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.129988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.130302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.130335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.130531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.130564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.130703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.130749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.130904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.130941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.131143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.131182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.131369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.131402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.131540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.131556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.131759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.131776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.131852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.131867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 
00:28:40.256 [2024-11-26 07:38:08.132020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.132055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.132321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.132355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.132605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.132639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.132748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.132764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.132850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.132864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.132960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.132976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.133090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.133108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.133257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.133273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.133355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.133368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.133510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.133550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 
00:28:40.256 [2024-11-26 07:38:08.133687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.133721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.133850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.133885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.134067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.134084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.134293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.134310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.134458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.134474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.134587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.134619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.134741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.134775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.256 qpair failed and we were unable to recover it. 00:28:40.256 [2024-11-26 07:38:08.134957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.256 [2024-11-26 07:38:08.134993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.135142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.135176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.135299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.135332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 
00:28:40.257 [2024-11-26 07:38:08.135480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.135516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.135719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.135735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.135880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.135896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.135990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.136029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.136206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.136239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.136366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.136398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.136673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.136707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.136837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.136870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.137055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.137090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.137306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.137339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 
00:28:40.257 [2024-11-26 07:38:08.137605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.137639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.137913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.137946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.138083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.138117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.138263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.138303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.138522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.138554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.138678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.138712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.138833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.138866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.139039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.139056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.139205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.139239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.139370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.139403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 
00:28:40.257 [2024-11-26 07:38:08.139581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.139616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.139811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.139844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.139980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.140015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.140145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.140179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.140390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.140422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.140539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.140573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.140759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.140792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.140933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.140963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.141133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.141166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.141276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.141309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 
00:28:40.257 [2024-11-26 07:38:08.141494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.257 [2024-11-26 07:38:08.141528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.257 qpair failed and we were unable to recover it. 00:28:40.257 [2024-11-26 07:38:08.141721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.141754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.141945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.141991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.142141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.142175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.142364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.142398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.142523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.142556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.142678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.142712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.142972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.143008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.143130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.143163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.143361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.143395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 
00:28:40.258 [2024-11-26 07:38:08.143543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.143583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.143786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.143820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.143998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.144032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.144213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.144247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.144465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.144499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.144685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.144719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.144918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.144930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.145009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.145021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.145084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.145095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.145168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.145179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 
00:28:40.258 [2024-11-26 07:38:08.145254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.145265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.145408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.145419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.145553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.145564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.145662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.145697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.145824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.145858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.145990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.146024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.146145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.146178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.146353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.146386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.146583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.146616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.146733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.146765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 
00:28:40.258 [2024-11-26 07:38:08.146972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.147009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.147193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.147227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.147404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.147437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.147622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.147656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.147940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.147984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.148221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.148237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.148384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.148400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.148552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.148569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.148721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.258 [2024-11-26 07:38:08.148738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.258 qpair failed and we were unable to recover it. 00:28:40.258 [2024-11-26 07:38:08.148818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.148848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 
00:28:40.259 [2024-11-26 07:38:08.149037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.149070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.149188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.149224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.149418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.149452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.149576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.149609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.149819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.149850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.150082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.150097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.150297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.150309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.150480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.150512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.150696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.150730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.150860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.150892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 
00:28:40.259 [2024-11-26 07:38:08.151044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.151062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.151201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.151221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.151315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.151326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.151484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.151496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.151586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.151597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.151691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.151718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.151813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.151825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.151978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.151991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.152149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.152161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.152235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.152267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 
00:28:40.259 [2024-11-26 07:38:08.152564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.152596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.152790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.152834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.152919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.152929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.153076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.153089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.153155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.153167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.153255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.153266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.153361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.153372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.153452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.153464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.153668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.153700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.153875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.153908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 
00:28:40.259 [2024-11-26 07:38:08.154093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.154129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.154319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.154353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.154461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.154494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.154731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.154743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.154825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.154836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.154900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.154912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.155067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.259 [2024-11-26 07:38:08.155102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.259 qpair failed and we were unable to recover it. 00:28:40.259 [2024-11-26 07:38:08.155235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.155269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.155380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.155413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.155597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.155631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 
00:28:40.260 [2024-11-26 07:38:08.155873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.155906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.156050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.156063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.156232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.156266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.156478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.156511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.156687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.156720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.156892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.156926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.157131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.157143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.157281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.157293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.157371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.157383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.157524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.157536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 
00:28:40.260 [2024-11-26 07:38:08.157636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.157674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.157784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.157819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.158009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.158044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.158249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.158282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.158483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.158517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.158625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.158659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.158835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.158868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.159053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.159087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.159268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.159302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.159485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.159518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 
00:28:40.260 [2024-11-26 07:38:08.159725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.159758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.159964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.159998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.160138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.160179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.160311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.160323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.160530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.160562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.160761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.160795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.160912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.160946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.161108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.161120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.161277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.161308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.161583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.161616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 
00:28:40.260 [2024-11-26 07:38:08.161748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.161781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.161919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.161931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.162020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.162032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.162207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.162240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.162416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.260 [2024-11-26 07:38:08.162449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.260 qpair failed and we were unable to recover it. 00:28:40.260 [2024-11-26 07:38:08.162561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.162595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.162780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.162813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.163009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.163048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.163239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.163256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.163358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.163374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 
00:28:40.261 [2024-11-26 07:38:08.163446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.163461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.163628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.163660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.163784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.163816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.163927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.163970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.164103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.164136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.164270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.164302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.164513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.164546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.164667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.164705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.164850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.164867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.164961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.164977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 
00:28:40.261 [2024-11-26 07:38:08.165119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.165136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.165342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.165358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.165441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.165456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.165597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.165612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.165711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.165727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.165809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.165823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.165957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.165983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.166054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.166066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.166153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.166163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.166299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.166312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 
00:28:40.261 [2024-11-26 07:38:08.166444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.166477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.166603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.166636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.166749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.166783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.166973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.167010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.167140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.167173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.167319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.167353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.167476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.167511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.167637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.167670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.167858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.167892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.168102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.168115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 
00:28:40.261 [2024-11-26 07:38:08.168282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.168319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.168469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.168504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.168716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.261 [2024-11-26 07:38:08.168750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.261 qpair failed and we were unable to recover it. 00:28:40.261 [2024-11-26 07:38:08.168939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.168981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.169234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.169267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.169406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.169441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.169734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.169768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.169954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.169969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.170058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.170069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.170163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.170195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 
00:28:40.262 [2024-11-26 07:38:08.170394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.170428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.170629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.170662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.170786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.170799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.171004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.171040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.171178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.171212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.171330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.171363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.171581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.171615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.171742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.171775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.171915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.171959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.172138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.172151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 
00:28:40.262 [2024-11-26 07:38:08.172350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.172385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.172526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.172560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.172750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.172784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.172982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.172996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.173158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.173191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.173321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.173355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.173487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.173522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.173766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.173800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.173995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.174031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.174326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.174360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 
00:28:40.262 [2024-11-26 07:38:08.174478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.174511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.174631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.174665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.174846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.174879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.175061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.175073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.175163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.175175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.175317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.175329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.262 qpair failed and we were unable to recover it. 00:28:40.262 [2024-11-26 07:38:08.175468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.262 [2024-11-26 07:38:08.175502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.175629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.175662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.175772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.175806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.175935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.175986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 
00:28:40.263 [2024-11-26 07:38:08.176060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.176071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.176199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.176211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.176357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.176370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.176530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.176568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.176761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.176774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.176846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.176859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.176954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.176989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.177220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.177261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.177434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.177469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.177710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.177744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 
00:28:40.263 [2024-11-26 07:38:08.177919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.177932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.178014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.178026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.178225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.178237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.178312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.178323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.178412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.178423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.178582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.178614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.178802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.178837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.179072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.179106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.179237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.179271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.179521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.179558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 
00:28:40.263 [2024-11-26 07:38:08.179825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.179866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.180061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.180096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.180283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.180317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.180517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.180550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.180794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.180830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.180966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.181001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.181245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.181281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.181397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.181431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.181604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.181638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.181904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.181937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 
00:28:40.263 [2024-11-26 07:38:08.182125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.182162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.182342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.182374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.182481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.182516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.182652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.263 [2024-11-26 07:38:08.182686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.263 qpair failed and we were unable to recover it. 00:28:40.263 [2024-11-26 07:38:08.182978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.183052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.183212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.183231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.183440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.183475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.183661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.183694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.183818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.183852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.183963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.183981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 
00:28:40.264 [2024-11-26 07:38:08.184079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.184096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.184231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.184246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.184328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.184343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.184420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.184434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.184519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.184534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.184624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.184657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.184843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.184876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.185055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.185098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.185360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.185376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.185456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.185472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 
00:28:40.264 [2024-11-26 07:38:08.185567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.185581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.185748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.185780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.185918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.185963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.186092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.186125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.186255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.186288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.186414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.186448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.186630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.186663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.186845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.186878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.187046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.187064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.187144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.187159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 
00:28:40.264 [2024-11-26 07:38:08.187314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.187330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.187518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.187559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.187704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.187743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.187922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.187965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.188144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.188157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.188299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.188311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.188395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.188406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.188609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.188641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.188770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.188806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.188943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.188990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 
00:28:40.264 [2024-11-26 07:38:08.189098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.189110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.264 [2024-11-26 07:38:08.189175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.264 [2024-11-26 07:38:08.189186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.264 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.189336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.189370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.189496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.189529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.189820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.189892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.190007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.190026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.190248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.190282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.190406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.190439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.190582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.190615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.190750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.190783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 
00:28:40.265 [2024-11-26 07:38:08.190959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.190975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.191067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.191113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.191294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.191327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.191510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.191542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.191683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.191716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.191844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.191878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.192005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.192040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.192223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.192257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.192395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.192430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.192683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.192725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 
00:28:40.265 [2024-11-26 07:38:08.192979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.193017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.193303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.193339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.193520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.193553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.193737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.193771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.194010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.194027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.194274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.194290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.194545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.194583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.194768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.194803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.194927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.194971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.195234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.195267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 
00:28:40.265 [2024-11-26 07:38:08.195461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.195494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.195706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.195748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.196039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.196074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.196251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.196284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.196412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.196446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.196639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.196673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.196952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.196968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.197129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.197146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.197241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.197259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 00:28:40.265 [2024-11-26 07:38:08.197398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.265 [2024-11-26 07:38:08.197414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.265 qpair failed and we were unable to recover it. 
00:28:40.266 [2024-11-26 07:38:08.197622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.197656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.197833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.197867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.198055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.198090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.198381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.198419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.198679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.198712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.198966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.199010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.199142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.199156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.199353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.199375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.199530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.199564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.199754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.199788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 
00:28:40.266 [2024-11-26 07:38:08.200004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.200017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.200192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.200225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.200415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.200448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.200688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.200723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.200966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.200978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.201063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.201108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.201216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.201249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.201497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.201530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.201863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.201905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.202098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.202115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 
00:28:40.266 [2024-11-26 07:38:08.202368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.202402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.202628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.202661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.202915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.202969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.203226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.203260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.203542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.203577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.203850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.203884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.204088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.204122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.204246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.204280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.204541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.204575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.204781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.204815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 
00:28:40.266 [2024-11-26 07:38:08.204941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.204990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.205231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.205265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.205455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.266 [2024-11-26 07:38:08.205488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.266 qpair failed and we were unable to recover it. 00:28:40.266 [2024-11-26 07:38:08.205631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.205664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.205775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.205807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.205938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.205982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.206195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.206211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.206369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.206402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.206587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.206619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.206878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.206912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 
00:28:40.267 [2024-11-26 07:38:08.207205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.207222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.207315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.207330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.207495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.207511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.207721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.207753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.208029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.208065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.208339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.208361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.208518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.208533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.208694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.208711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.208864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.208896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.209090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.209123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 
00:28:40.267 [2024-11-26 07:38:08.209416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.209449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.209566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.209599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.209786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.209817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.210002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.210019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.210109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.210123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.210284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.210301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.210466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.210498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.210633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.210667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.210913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.210945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.211164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.211197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 
00:28:40.267 [2024-11-26 07:38:08.211394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.211427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.211691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.211737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.211967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.211985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.212202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.212218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.212442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.212458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.212545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.212560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.212720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.212736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.212880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.212896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.213101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.213133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.213317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.213350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 
00:28:40.267 [2024-11-26 07:38:08.213539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.267 [2024-11-26 07:38:08.213570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.267 qpair failed and we were unable to recover it. 00:28:40.267 [2024-11-26 07:38:08.213762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.213795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.213911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.213959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.214171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.214203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.214334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.214365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.214539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.214571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.214837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.214870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.215076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.215111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.215293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.215325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.215578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.215653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 
00:28:40.268 [2024-11-26 07:38:08.215897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.215966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.216139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.216169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.216277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.216313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.216584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.216618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.216887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.216920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.217211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.217244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.217475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.217508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.217778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.217811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.217973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.217986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.218131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.218165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 
00:28:40.268 [2024-11-26 07:38:08.218301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.218335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.218471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.218504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.218700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.218733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.218980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.219015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.219248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.219261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.219337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.219349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.219435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.219445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.219675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.219688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.219893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.219905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.220171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.220192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 
00:28:40.268 [2024-11-26 07:38:08.220295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.220313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.220504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.220540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.220734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.220768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.220966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.220983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.221240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.221274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.221456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.221488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.221726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.221759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.222004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.222037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.222226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.268 [2024-11-26 07:38:08.222259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.268 qpair failed and we were unable to recover it. 00:28:40.268 [2024-11-26 07:38:08.222526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.222560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 
00:28:40.269 [2024-11-26 07:38:08.222745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.222779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.222987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.223022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.223200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.223216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.223431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.223464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.223680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.223714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.223913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.223961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.224237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.224271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.224541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.224573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.224762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.224797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.225038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.225072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 
00:28:40.269 [2024-11-26 07:38:08.225338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.225371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.225493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.225526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.225744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.225778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.225946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.225990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.226172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.226188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.226372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.226404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.226537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.226578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.226753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.226786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.227074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.227110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.227372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.227404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 
00:28:40.269 [2024-11-26 07:38:08.227593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.227626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.227891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.227933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.228199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.228216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.228361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.228377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.228632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.228666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.228929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.228975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.229178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.229193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.229450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.229484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.229661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.229694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.229834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.229866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 
00:28:40.269 [2024-11-26 07:38:08.230128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.230145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.230349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.230366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.230655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.230688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.230869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.230902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.231180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.231214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.231422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.231455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.231644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.231678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.269 qpair failed and we were unable to recover it. 00:28:40.269 [2024-11-26 07:38:08.231891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.269 [2024-11-26 07:38:08.231907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.232127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.232144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.232317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.232333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 
00:28:40.270 [2024-11-26 07:38:08.232556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.232589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.232838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.232871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.233094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.233112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.233368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.233407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.233543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.233576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.234415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.234444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.234756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.234792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.234996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.235033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.235176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.235210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.235480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.235514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 
00:28:40.270 [2024-11-26 07:38:08.235732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.235765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.235894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.235928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.236082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.236098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.236253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.236271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.236500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.236517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.236699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.236716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.236860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.236892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.237202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.237237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.237375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.237407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.237667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.237698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 
00:28:40.270 [2024-11-26 07:38:08.237933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.237954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.241963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.241995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.242272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.242290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.242453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.242469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.242639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.242656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.242752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.242768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.242868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.242883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.242984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.243001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.243175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.243192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.243410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.243427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 
00:28:40.270 [2024-11-26 07:38:08.243702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.243730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.243941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.243975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.244204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.244221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.244361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.244376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.244471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.244487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.270 [2024-11-26 07:38:08.244744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.270 [2024-11-26 07:38:08.244762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.270 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.244931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.244953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.245070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.245085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.245168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.245183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.245395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.245412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 
00:28:40.271 [2024-11-26 07:38:08.245566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.245582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.245796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.245812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.245968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.245985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.246088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.246103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.246197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.246221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.246408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.246426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.246521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.246537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.246712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.246729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.246798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.246814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.246905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.246920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 
00:28:40.271 [2024-11-26 07:38:08.247098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.247116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.247218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.247232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.247388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.247404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.247632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.247649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.247802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.247819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.248021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.248040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.248144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.248159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.248323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.248344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.248492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.248509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.248584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.248600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 
00:28:40.271 [2024-11-26 07:38:08.248704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.248718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.248952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.248969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.249153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.249169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.249272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.249288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.249386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.249400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.249569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.249587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.249694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.249710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.249918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.249935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.250136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.271 [2024-11-26 07:38:08.250153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.271 qpair failed and we were unable to recover it. 00:28:40.271 [2024-11-26 07:38:08.250435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.250452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 
00:28:40.272 [2024-11-26 07:38:08.250606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.250621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.250777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.250795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.251041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.251059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.251278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.251295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.251382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.251397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.251508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.251524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.251617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.251631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.251855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.251872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.252081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.252097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.252323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.252340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 
00:28:40.272 [2024-11-26 07:38:08.252561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.252578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.252726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.252743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.252953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.252970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.253133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.253157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.253316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.253344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.253494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.253508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.253726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.253738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.253887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.253900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.254097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.254110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.254302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.254316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 
00:28:40.272 [2024-11-26 07:38:08.254480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.254492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.254632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.254645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.254849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.254862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.255091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.255105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.255266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.255278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.255371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.255382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.255607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.255620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.255751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.255766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.255919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.255931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.256023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.256035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 
00:28:40.272 [2024-11-26 07:38:08.256198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.256210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.256431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.256443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.256593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.256606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.256737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.256750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.256881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.256894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.257129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.257142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.257240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.272 [2024-11-26 07:38:08.257251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.272 qpair failed and we were unable to recover it. 00:28:40.272 [2024-11-26 07:38:08.257321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.257333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.257478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.257491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.257629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.257641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 
00:28:40.273 [2024-11-26 07:38:08.257792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.257804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.258006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.258019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.258236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.258249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.258416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.258430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.258592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.258604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.258747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.258759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.258890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.258902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.259077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.259090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.259219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.259231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.259315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.259327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 
00:28:40.273 [2024-11-26 07:38:08.259546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.259558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.259706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.259719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.259939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.259955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.260100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.260112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.260357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.260369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.260569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.260582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.260752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.260765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.260989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.261002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.261145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.261158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.261234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.261245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 
00:28:40.273 [2024-11-26 07:38:08.261407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.261420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.261547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.261559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.261708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.261721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.261863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.261875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.261965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.261978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.262052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.262064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.262136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.262147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.262363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.262377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.262590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.262603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.262752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.262765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 
00:28:40.273 [2024-11-26 07:38:08.262902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.262915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.263060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.263074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.263168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.263179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.263309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.263321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.263519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.273 [2024-11-26 07:38:08.263532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.273 qpair failed and we were unable to recover it. 00:28:40.273 [2024-11-26 07:38:08.263621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.263632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.263852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.263864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.263939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.263953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.264013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.264025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.264117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.264129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 
00:28:40.274 [2024-11-26 07:38:08.264258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.264271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.264468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.264481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.264578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.264589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.264675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.264688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.264825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.264837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.265035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.265047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.265223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.265236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.265365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.265377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.265514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.265526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.265675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.265686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 
00:28:40.274 [2024-11-26 07:38:08.265906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.265918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.265994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.266006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.266296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.266308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.274 [2024-11-26 07:38:08.266393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.274 [2024-11-26 07:38:08.266404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.274 qpair failed and we were unable to recover it. 00:28:40.558 [2024-11-26 07:38:08.266655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.558 [2024-11-26 07:38:08.266673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.558 qpair failed and we were unable to recover it. 00:28:40.558 [2024-11-26 07:38:08.266903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.558 [2024-11-26 07:38:08.266919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.558 qpair failed and we were unable to recover it. 00:28:40.558 [2024-11-26 07:38:08.267155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.558 [2024-11-26 07:38:08.267173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.558 qpair failed and we were unable to recover it. 00:28:40.558 [2024-11-26 07:38:08.267317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.558 [2024-11-26 07:38:08.267332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.558 qpair failed and we were unable to recover it. 00:28:40.558 [2024-11-26 07:38:08.267488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.558 [2024-11-26 07:38:08.267504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.558 qpair failed and we were unable to recover it. 00:28:40.558 [2024-11-26 07:38:08.267780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.558 [2024-11-26 07:38:08.267796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.558 qpair failed and we were unable to recover it. 
00:28:40.558 [2024-11-26 07:38:08.267943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.267961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.268179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.268192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.268333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.268345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.268474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.268487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.268699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.268711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.268854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.268867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.269018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.269030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.269240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.269254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.269403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.269415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.269638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.269651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 
00:28:40.559 [2024-11-26 07:38:08.269849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.269862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.269935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.269952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.270045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.270129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.270267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.270440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.270593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.270695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.270777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.270870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 
00:28:40.559 [2024-11-26 07:38:08.270984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.270997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.271176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.271189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.271324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.271336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.271430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.271443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.271605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.271618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.271753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.271766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.271907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.271919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.272073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.272085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.272305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.272317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 00:28:40.559 [2024-11-26 07:38:08.272532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.559 [2024-11-26 07:38:08.272544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.559 qpair failed and we were unable to recover it. 
00:28:40.560 [2024-11-26 07:38:08.272681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.272693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.272871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.272883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.273079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.273091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.273217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.273229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.273511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.273537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.273680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.273694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.273911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.273925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.274088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.274102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.274255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.274267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.274326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.274338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 
00:28:40.560 [2024-11-26 07:38:08.274541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.274555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.274653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.274666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.274742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.274753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.274929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.274942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.275167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.275179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.275270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.275283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.275490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.275524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.275795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.275829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.276054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.276092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.276388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.276424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 
00:28:40.560 [2024-11-26 07:38:08.276621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.276655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.276906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.276940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.277133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.277146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.277304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.277337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.277475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.277508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.277698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.277730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.277906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.277940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.278229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.560 [2024-11-26 07:38:08.278262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.560 qpair failed and we were unable to recover it. 00:28:40.560 [2024-11-26 07:38:08.278467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.278502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.278679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.278711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 
00:28:40.561 [2024-11-26 07:38:08.278980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.279016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.279141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.279181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.279454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.279468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.279618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.279631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.279816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.279850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.279990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.280026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.280273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.280285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.280421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.280433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.280618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.280630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.280772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.280803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 
00:28:40.561 [2024-11-26 07:38:08.280916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.280962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.281157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.281170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.281383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.281395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.281597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.281609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.281688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.281698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.281791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.281803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.281938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.281982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.282117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.282150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.282398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.282430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.282603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.282636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 
00:28:40.561 [2024-11-26 07:38:08.282816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.282848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.283052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.283068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.283279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.283294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.283458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.283491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.283635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.283667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.283911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.283944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.284148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.561 [2024-11-26 07:38:08.284163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.561 qpair failed and we were unable to recover it. 00:28:40.561 [2024-11-26 07:38:08.284237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.284252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.284478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.284516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.284708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.284741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 
00:28:40.562 [2024-11-26 07:38:08.284976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.285013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.285132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.285147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.285286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.285302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.285512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.285528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.285605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.285620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.285870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.285885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.286072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.286089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.286270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.286286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.286494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.286508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.286671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.286687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 
00:28:40.562 [2024-11-26 07:38:08.286865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.286880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.287005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.287042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.287300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.287335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.287590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.287622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.287806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.287839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.288011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.288046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.288235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.288268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.288445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.288484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.288617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.288649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.288870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.288903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 
00:28:40.562 [2024-11-26 07:38:08.289153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.289186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.289373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.289405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.289674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.289708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.289892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.289924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.290170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.290203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.290303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.562 [2024-11-26 07:38:08.290317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.562 qpair failed and we were unable to recover it. 00:28:40.562 [2024-11-26 07:38:08.290465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.290478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.290675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.290687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.290841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.290873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.291103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.291138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 
00:28:40.563 [2024-11-26 07:38:08.291346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.291378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.291582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.291614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.291856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.291889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.292154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.292166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.292296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.292309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.292404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.292415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.292586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.292598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.292748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.292780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.292915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.292957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.293213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.293226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 
00:28:40.563 [2024-11-26 07:38:08.293447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.293460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.293654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.293666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.293887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.293919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.294226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.294271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.294398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.294415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.294570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.294607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.294856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.294889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.295045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.295086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.295222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.295255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.295520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.295564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 
00:28:40.563 [2024-11-26 07:38:08.295650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.295666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.295874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.295890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.295974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.295991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.296155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.296172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.563 [2024-11-26 07:38:08.296318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.563 [2024-11-26 07:38:08.296335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.563 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.296484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.296501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.296588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.296604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.296783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.296800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.296969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.297007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.297212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.297252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 
00:28:40.564 [2024-11-26 07:38:08.297398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.297434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.297678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.297714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.297906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.297940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.298137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.298156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.298432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.298467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.298686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.298717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.298959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.298995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.299118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.299132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.299330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.299362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.299569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.299599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 
00:28:40.564 [2024-11-26 07:38:08.299726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.299758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.299942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.299986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.300166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.300197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.300379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.300414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.300671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.300701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.300879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.300912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.564 qpair failed and we were unable to recover it. 00:28:40.564 [2024-11-26 07:38:08.301055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.564 [2024-11-26 07:38:08.301092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.301267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.301308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.301513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.301528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.301729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.301749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 
00:28:40.565 [2024-11-26 07:38:08.301905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.301938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.302221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.302254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.302427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.302458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.302632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.302663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.302878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.302910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.303137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.303172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.303332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.303348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.303495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.303511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.303662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.303678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.303777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.303808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 
00:28:40.565 [2024-11-26 07:38:08.304071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.304103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.304374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.304409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.304539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.304571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.304771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.304803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.304982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.305016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.305192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.305208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.305324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.305357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.305483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.305517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.305707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.305739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.305998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.306031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 
00:28:40.565 [2024-11-26 07:38:08.306161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.306193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.306383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.306416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.306629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.306663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.306908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.306941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.307143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.307177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.307302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.565 [2024-11-26 07:38:08.307318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.565 qpair failed and we were unable to recover it. 00:28:40.565 [2024-11-26 07:38:08.307556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.307595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.307734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.307767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.307911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.307945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.308068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.308103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 
00:28:40.566 [2024-11-26 07:38:08.308347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.308364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.308580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.308596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.308800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.308817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.308964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.308981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.309211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.309245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.309422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.309456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.309635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.309669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.309792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.309826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.310017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.310054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.310280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.310296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 
00:28:40.566 [2024-11-26 07:38:08.310489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.310522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.310699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.310733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.310863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.310896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.311126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.311161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.311427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.311459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.311655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.311688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.311866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.311900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.312090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.312126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.312341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.312376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.312653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.312686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 
00:28:40.566 [2024-11-26 07:38:08.312969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.313016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.313178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.313194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.313347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.313381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.313599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.313636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.313812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.313845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.314110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.314144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.566 [2024-11-26 07:38:08.314334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.566 [2024-11-26 07:38:08.314369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.566 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.314646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.314661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.314891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.314908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.315072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.315089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 
00:28:40.567 [2024-11-26 07:38:08.315180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.315195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.315404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.315420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.315627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.315660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.315847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.315879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.316178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.316212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.316452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.316485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.316734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.316768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.317022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.317056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.317242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.317280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.317422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.317439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 
00:28:40.567 [2024-11-26 07:38:08.317599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.317632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.317897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.317930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.318097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.318140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.318300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.318316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.318466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.318499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.318617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.318649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.318918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.318960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.319211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.319244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.319512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.319547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.319680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.319713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 
00:28:40.567 [2024-11-26 07:38:08.319895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.319929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.320224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.320241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.320447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.320463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.320599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.320616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.320848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.320880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.321095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.321129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.321370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.321403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.567 [2024-11-26 07:38:08.321627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.567 [2024-11-26 07:38:08.321659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.567 qpair failed and we were unable to recover it. 00:28:40.568 [2024-11-26 07:38:08.321843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.568 [2024-11-26 07:38:08.321876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.568 qpair failed and we were unable to recover it. 00:28:40.568 [2024-11-26 07:38:08.322070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.568 [2024-11-26 07:38:08.322104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.568 qpair failed and we were unable to recover it. 
00:28:40.574 [2024-11-26 07:38:08.367069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.574 [2024-11-26 07:38:08.367103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420
00:28:40.574 qpair failed and we were unable to recover it.
00:28:40.574 [2024-11-26 07:38:08.367233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.367266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.367528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.367562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.367759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.367791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.367924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.367975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.368172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.368204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.368497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.368530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.368722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.368753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.368995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.369030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.369224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.369241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.369459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.369492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 
00:28:40.575 [2024-11-26 07:38:08.369617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.369649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.369904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.369936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.370131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.370164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.370274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.370306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.370506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.370538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.370729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.370771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.371003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.371021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.371242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.371259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.371409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.371426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.371564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.371581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 
00:28:40.575 [2024-11-26 07:38:08.371761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.371794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.372012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.372046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.372291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.372323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.372557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.372574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.372789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.372822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.372941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.372984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.373103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.373135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.575 [2024-11-26 07:38:08.373349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.575 [2024-11-26 07:38:08.373382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.575 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.373645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.373679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.373809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.373843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 
00:28:40.576 [2024-11-26 07:38:08.374091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.374125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.374361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.374393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.374566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.374583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.374735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.374751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.375011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.375047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.375227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.375258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.375450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.375466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.375713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.375745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.375934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.375976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.376241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.376273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 
00:28:40.576 [2024-11-26 07:38:08.376438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.376455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.376624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.376657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.376945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.376987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.377254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.377292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.377407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.377423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.377667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.377699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.377963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.377998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.378173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.378204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.378313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.378357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.378526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.378541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 
00:28:40.576 [2024-11-26 07:38:08.378705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.576 [2024-11-26 07:38:08.378738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.576 qpair failed and we were unable to recover it. 00:28:40.576 [2024-11-26 07:38:08.378855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.378888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.379145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.379180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.379368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.379400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.379584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.379617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.379803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.379835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.380026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.380061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.380336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.380352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.380488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.380504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.380728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.380744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 
00:28:40.577 [2024-11-26 07:38:08.380897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.380929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.381154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.381188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.381311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.381344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.381548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.381563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.381731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.381764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.381959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.381994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.382205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.382237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.382415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.382432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.382699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.382715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.382924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.382984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 
00:28:40.577 [2024-11-26 07:38:08.383136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.383169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.383363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.383404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.383633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.383650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.383854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.383887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.384015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.384050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.384226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.384259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.384400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.384432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.384623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.384658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.384857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.384891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 00:28:40.577 [2024-11-26 07:38:08.385167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.385201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.577 qpair failed and we were unable to recover it. 
00:28:40.577 [2024-11-26 07:38:08.385462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.577 [2024-11-26 07:38:08.385504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.385648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.385665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.385776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.385808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.386049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.386091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.386307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.386339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.386599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.386631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.386759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.386792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.386936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.386978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.387157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.387191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.387307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.387339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 
00:28:40.578 [2024-11-26 07:38:08.387564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.387595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.387839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.387872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.388164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.388200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.388402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.388435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.388625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.388658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.388853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.388886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.389108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.389142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.389398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.389431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.389616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.389633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.389862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.389893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 
00:28:40.578 [2024-11-26 07:38:08.390175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.390209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.390482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.390499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.390726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.390742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.390909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.390925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.391044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.391073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.391260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.391299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.391546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.391581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.391848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.391882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.392088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.578 [2024-11-26 07:38:08.392125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.578 qpair failed and we were unable to recover it. 00:28:40.578 [2024-11-26 07:38:08.392313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.392346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 
00:28:40.579 [2024-11-26 07:38:08.392596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.392633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.392759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.392792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.392974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.393008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.393191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.393224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.393399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.393433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.393702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.393718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.393874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.393889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.394089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.394102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.394237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.394249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.394497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.394510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 
00:28:40.579 [2024-11-26 07:38:08.394713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.394725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.394875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.394888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.395095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.395128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.395304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.395342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.395614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.395648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.395852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.395887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.396030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.396065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.396191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.396203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.396351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.396363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 00:28:40.579 [2024-11-26 07:38:08.396558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.579 [2024-11-26 07:38:08.396571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.579 qpair failed and we were unable to recover it. 
00:28:40.579 [2024-11-26 07:38:08.396729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.579 [2024-11-26 07:38:08.396741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:40.579 qpair failed and we were unable to recover it.
[The same three-line failure repeats for every subsequent connection attempt up to 07:38:08.446195: connect() is refused with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error against addr=10.0.0.2, port=4420, and the qpair fails without recovery. The attempts target tqpair=0x7f76c4000b90 throughout, apart from a short run against tqpair=0x1f24ba0 between 07:38:08.408163 and 07:38:08.413069; representative entries follow.]
00:28:40.581 [2024-11-26 07:38:08.408163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.581 [2024-11-26 07:38:08.408201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.581 qpair failed and we were unable to recover it.
00:28:40.587 [2024-11-26 07:38:08.446163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.587 [2024-11-26 07:38:08.446195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:40.587 qpair failed and we were unable to recover it.
00:28:40.587 [2024-11-26 07:38:08.446390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.446401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.446484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.446495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.446574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.446604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.446808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.446841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.447104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.447139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.447385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.447418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.447565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.447597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.447799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.447810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.448042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.448075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.448268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.448301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 
00:28:40.587 [2024-11-26 07:38:08.448492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.448524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.448635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.448645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.448776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.448787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.448886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.448897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.449001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.449012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.449208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.449221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.449370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.449382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.449513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.449525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.449694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.449727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.449970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.450004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 
00:28:40.587 [2024-11-26 07:38:08.450142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.450175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.450364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.450376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.450469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.450480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.450649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.450662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.450758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.450790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.451089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.451123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.451368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.451400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.587 [2024-11-26 07:38:08.451520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.587 [2024-11-26 07:38:08.451553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.587 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.451718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.451731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.451870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.451882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 
00:28:40.588 [2024-11-26 07:38:08.452096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.452109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.452311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.452323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.452470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.452482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.452744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.452777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.452897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.452930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.453219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.453253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.453382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.453414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.453552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.453591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.453785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.453819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.454078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.454113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 
00:28:40.588 [2024-11-26 07:38:08.454311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.454343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.454579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.454613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.454807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.454840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.454971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.455006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.455203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.455235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.455498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.455531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.455822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.455835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.456032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.456046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.456121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.456132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.456309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.456322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 
00:28:40.588 [2024-11-26 07:38:08.456546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.456558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.456819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.588 [2024-11-26 07:38:08.456852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.588 qpair failed and we were unable to recover it. 00:28:40.588 [2024-11-26 07:38:08.457141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.457176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.457391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.457424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.457671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.457703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.457914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.457958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.458229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.458263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.458493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.458525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.458786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.458799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.459025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.459038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 
00:28:40.589 [2024-11-26 07:38:08.459202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.459215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.459309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.459319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.459561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.459574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.459852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.459884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.460157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.460191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.460413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.460446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.460587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.460599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.460762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.460775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.460872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.460883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.461027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.461041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 
00:28:40.589 [2024-11-26 07:38:08.461261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.461272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.461497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.461509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.461678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.461691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.461837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.461850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.462057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.462092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.462212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.462245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.462514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.462547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.462829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.462844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.463043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.463072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.463227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.463239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 
00:28:40.589 [2024-11-26 07:38:08.463413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.463444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.463686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.463717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.463964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.463999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.589 [2024-11-26 07:38:08.464250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.589 [2024-11-26 07:38:08.464282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.589 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.464547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.464579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.464850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.464884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.465122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.465157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.465354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.465396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.465546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.465558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.465701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.465760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 
00:28:40.590 [2024-11-26 07:38:08.466071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.466105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.466302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.466335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.466569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.466602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.466866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.466879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.467026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.467039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.467190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.467202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.467372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.467385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.467473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.467484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.467562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.467573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.467732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.467766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 
00:28:40.590 [2024-11-26 07:38:08.468036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.468070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.468316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.468349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.468527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.468560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.468699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.468731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.468860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.468872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.469084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.469120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.469333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.469365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.469639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.469672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.469954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.469966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.470144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.470177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 
00:28:40.590 [2024-11-26 07:38:08.470451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.470491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.470621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.470634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.590 qpair failed and we were unable to recover it. 00:28:40.590 [2024-11-26 07:38:08.470797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.590 [2024-11-26 07:38:08.470810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.470954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.470987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.471189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.471223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.471466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.471500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.471695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.471728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.471924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.471942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.472194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.472207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.472406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.472418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 
00:28:40.591 [2024-11-26 07:38:08.472621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.472633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.472803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.472843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.473077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.473113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.473320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.473360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.473443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.473454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.474423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.474453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.474631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.474645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.474772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.474806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.475024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.475061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.475205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.475238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 
00:28:40.591 [2024-11-26 07:38:08.475445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.475479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.475728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.475760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.475885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.475920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.476121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.476197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.476421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.476459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.476607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.476624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.476796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.476829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.477075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.477111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.477234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.477268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.477414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.477431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 
00:28:40.591 [2024-11-26 07:38:08.477607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.477639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.477841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.591 [2024-11-26 07:38:08.477874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.591 qpair failed and we were unable to recover it. 00:28:40.591 [2024-11-26 07:38:08.478070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.478102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.478378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.478411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.478570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.478604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.478795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.478812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.478983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.479018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.479146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.479179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.479325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.479358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.479559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.479592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 
00:28:40.592 [2024-11-26 07:38:08.479794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.479826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.480036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.480072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.480257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.480290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.480495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.480511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.480678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.480696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.480856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.480889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.481095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.481130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.481366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.481406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.481559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.481594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.481786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.481820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 
00:28:40.592 [2024-11-26 07:38:08.481975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.482010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.482266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.482300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.482507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.482540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.482787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.482820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.483125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.483160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.483360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.483394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.483519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.483535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.483712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.483744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.483929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.483973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.592 [2024-11-26 07:38:08.484125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.484158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 
00:28:40.592 [2024-11-26 07:38:08.484289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.592 [2024-11-26 07:38:08.484322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.592 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.484622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.484656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.484780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.484817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.485031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.485048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.485200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.485217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.485313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.485328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.485535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.485553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.485693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.485709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.485873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.485905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.486044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.486080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 
00:28:40.593 [2024-11-26 07:38:08.486280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.486313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.486513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.486547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.486747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.486763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.486972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.486989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.487101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.487120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.487269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.487282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.487435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.487449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.487630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.487643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.487805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.487852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.487973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.488008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 
00:28:40.593 [2024-11-26 07:38:08.488268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.488301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.488489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.488524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.488701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.488714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.488858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.488893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.489187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.489223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.489401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.489435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.489718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.489731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.489893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.489910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.490047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.490059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.593 [2024-11-26 07:38:08.490152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.490164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 
00:28:40.593 [2024-11-26 07:38:08.490332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.593 [2024-11-26 07:38:08.490346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.593 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.490546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.490559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.490663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.490675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.490772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.490783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.491003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.491018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.491175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.491188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.491330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.491343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.491490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.491504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.491673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.491686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.491845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.491878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 
00:28:40.594 [2024-11-26 07:38:08.492064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.492099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.492685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.492706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.492807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.492821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.493094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.493109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.493286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.493319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.493513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.493547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.493745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.493778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.494044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.494058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.494230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.494243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.494394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.494407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 
00:28:40.594 [2024-11-26 07:38:08.494500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.494511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.494704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.494717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.495158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.495177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.495342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.495356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.495536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.495549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.594 qpair failed and we were unable to recover it. 00:28:40.594 [2024-11-26 07:38:08.495703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.594 [2024-11-26 07:38:08.495715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.495856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.495869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.496021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.496035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.496137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.496170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.496373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.496409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 
00:28:40.595 [2024-11-26 07:38:08.496635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.496676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.496762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.496775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.496851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.496863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.497026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.497039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.497192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.497224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.497346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.497378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.497619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.497654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.497778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.497819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.498014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.498050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.498232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.498264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 
00:28:40.595 [2024-11-26 07:38:08.498483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.498515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.498809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.498841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.499044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.499079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.499227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.499261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.499457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.499489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.499685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.499719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.499969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.500004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.500205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.500237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.500502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.500534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.500853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.500888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 
00:28:40.595 [2024-11-26 07:38:08.501035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.501069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.501987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.502011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.502257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.502293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.502497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.502530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.595 qpair failed and we were unable to recover it. 00:28:40.595 [2024-11-26 07:38:08.502816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.595 [2024-11-26 07:38:08.502848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.503091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.503129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.503416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.503450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.503588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.503601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.503778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.503813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.504005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.504040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 
00:28:40.596 [2024-11-26 07:38:08.504169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.504202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.504384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.504423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.504648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.504682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.504875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.504907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.505160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.505221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.505498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.505534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.505719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.505736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.505912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.505955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.506099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.506133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.506271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.506303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 
00:28:40.596 [2024-11-26 07:38:08.506437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.506471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.507008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.507053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.507284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.507323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.507484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.507520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.508157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.508184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.508376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.508395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.508551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.508568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.508667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.508688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.508920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.508937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.509104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.509121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 
00:28:40.596 [2024-11-26 07:38:08.509343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.509361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.509622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.509639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.509871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.509890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.510049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.510066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.596 [2024-11-26 07:38:08.510276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.596 [2024-11-26 07:38:08.510293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.596 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.510410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.510426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.510628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.510645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.510804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.510821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.511037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.511054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.511163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.511180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 
00:28:40.597 [2024-11-26 07:38:08.511270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.511286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.511510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.511528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.511627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.511645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.511888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.511904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.512007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.512021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.512253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.512266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.512514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.512528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.512670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.512684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.512904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.512918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.513009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.513021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 
00:28:40.597 [2024-11-26 07:38:08.513125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.513137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.513220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.513232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.513333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.513345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.513596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.513608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.513697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.513708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.513929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.513943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.514027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.514039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.514183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.514195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.514289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.514301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.514456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.514469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 
00:28:40.597 [2024-11-26 07:38:08.514606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.514619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.514866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.514879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.514965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.514977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.515151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.515166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.597 [2024-11-26 07:38:08.515306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.597 [2024-11-26 07:38:08.515319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.597 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.515412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.515424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.515505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.515517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.515775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.515791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.516054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.516068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.516161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.516172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 
00:28:40.598 [2024-11-26 07:38:08.516328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.516341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.516492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.516505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.516718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.516730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.516955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.516968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.517107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.517120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.517299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.517312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.517562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.517576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.517820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.517834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.518053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.518076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.518225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.518238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 
00:28:40.598 [2024-11-26 07:38:08.518458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.518471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.518647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.518659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.518848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.518861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.519028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.519042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.519183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.519195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.519420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.519433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.519584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.519596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.519750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.519763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.519834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.519845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.519998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.520012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 
00:28:40.598 [2024-11-26 07:38:08.520157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.520170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.520317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.520330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.520544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.520557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.520706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.520719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.520896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.520937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.598 [2024-11-26 07:38:08.521124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.598 [2024-11-26 07:38:08.521143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.598 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.521331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.521348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.521497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.521514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.521694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.521710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.521892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.521910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 
00:28:40.599 [2024-11-26 07:38:08.522010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.522026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.522185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.522201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.522375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.522392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.522541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.522557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.522691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.522703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.522959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.522973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.523172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.523186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.523341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.523356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.523453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.523464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.523615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.523629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 
00:28:40.599 [2024-11-26 07:38:08.523699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.523710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.523860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.523873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.524085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.524098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.524185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.524197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.524353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.524367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.524590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.524603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.524812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.524826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.524923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.524935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.525031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.525050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.525260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.525277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 
00:28:40.599 [2024-11-26 07:38:08.525416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.525434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.525654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.525670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.525769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.525784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.525938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.525962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.526061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.526075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.526283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.526301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.526450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.599 [2024-11-26 07:38:08.526467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.599 qpair failed and we were unable to recover it. 00:28:40.599 [2024-11-26 07:38:08.526635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.526653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.526758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.526774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.526995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.527014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 
00:28:40.600 [2024-11-26 07:38:08.527269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.527286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.527428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.527445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.527590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.527606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.527694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.527710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.527863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.527882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.528024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.528041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.528199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.528216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.528362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.528379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.528628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.528646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.528752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.528769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 
00:28:40.600 [2024-11-26 07:38:08.528921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.528937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.529089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.529106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.529276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.529293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.529443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.529459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.529631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.529648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.529738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.529754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.529982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.529999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.530254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.530271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.530431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.530447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.530655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.530671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 
00:28:40.600 [2024-11-26 07:38:08.530901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.530918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.531189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.531207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.531423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.531439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.531619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.531635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.531844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.531860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.531961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.531980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.600 qpair failed and we were unable to recover it. 00:28:40.600 [2024-11-26 07:38:08.532232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.600 [2024-11-26 07:38:08.532249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.532345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.532361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.532603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.532619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.532759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.532776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 
00:28:40.601 [2024-11-26 07:38:08.532935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.532958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.533131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.533146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.533290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.533306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.533390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.533404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.533556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.533572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.533727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.533744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.533886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.533903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.534045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.534062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.534243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.534259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.534415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.534431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 
00:28:40.601 [2024-11-26 07:38:08.534587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.534604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.534744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.534760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.534912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.534929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.535092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.535108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.535260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.535279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.535511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.535528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.535758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.535775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.536022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.536039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.536206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.536221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.536371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.536383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 
00:28:40.601 [2024-11-26 07:38:08.536601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.536613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.536691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.536703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.536937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.536956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.537089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.537102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.537273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.537285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.537355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.537367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.537523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.537535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.537693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.537705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.537960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.537973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.538116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.538140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 
00:28:40.601 [2024-11-26 07:38:08.538377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.538389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.601 [2024-11-26 07:38:08.538557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.601 [2024-11-26 07:38:08.538570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.601 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.538790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.538802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.538899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.538912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.539076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.539102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.539239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.539252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.539406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.539418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.539634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.539648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.539892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.539904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.540090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.540103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 
00:28:40.602 [2024-11-26 07:38:08.540248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.540261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.540420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.540432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.540592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.540604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.540778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.540790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.541039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.541053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.541188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.541201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.541402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.541415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.541558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.541570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.541631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.541642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.541802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.541816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 
00:28:40.602 [2024-11-26 07:38:08.541957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.541970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.542138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.542150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.542374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.542386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.542471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.542482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.542703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.542717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.542855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.542869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.543086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.543100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.543235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.543248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.543389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.543401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 00:28:40.602 [2024-11-26 07:38:08.543475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.602 [2024-11-26 07:38:08.543487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.602 qpair failed and we were unable to recover it. 
00:28:40.602 [2024-11-26 07:38:08.543707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:40.602 [2024-11-26 07:38:08.543720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 
00:28:40.602 qpair failed and we were unable to recover it. 
[... the identical failure sequence repeats continuously from 00:28:40.602 (07:38:08.543) through 00:28:40.608 (07:38:08.580498): posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:40.608 [2024-11-26 07:38:08.580758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.580770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.580974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.580986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.581153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.581165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.581383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.581395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.581600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.581612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.581701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.581711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.581842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.581854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.581998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.582010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.582165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.582177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.582433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.582445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 
00:28:40.608 [2024-11-26 07:38:08.582591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.582603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.582800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.582812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.583052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.583064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.583335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.583347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.583484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.583496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.583624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.583636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.583832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.583844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.608 [2024-11-26 07:38:08.584065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.608 [2024-11-26 07:38:08.584077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.608 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.584213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.584226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.584435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.584448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 
00:28:40.609 [2024-11-26 07:38:08.584607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.584619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.584784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.584797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.585020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.585034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.585165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.585178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.585353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.585367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.585507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.585518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.585711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.585723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.585918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.585930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.586072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.586084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.586327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.586338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 
00:28:40.609 [2024-11-26 07:38:08.586421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.586432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.586511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.586522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.586733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.586744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.586899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.586911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.587096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.587109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.587354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.587366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.587577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.587589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.587783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.587801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.587952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.587964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.588120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.588132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 
00:28:40.609 [2024-11-26 07:38:08.588268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.588281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.588415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.588427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.588644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.588656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.588870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.588882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.589144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.589156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.589382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.589394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.589561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.589573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.589776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.589789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.609 [2024-11-26 07:38:08.590005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.609 [2024-11-26 07:38:08.590018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.609 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.590242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.590254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 
00:28:40.610 [2024-11-26 07:38:08.590405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.590418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.590633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.590646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.590790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.590802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.591001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.591014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.591261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.591273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.591404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.591415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.591631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.591643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.591804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.591816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.591978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.591991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.592134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.592146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 
00:28:40.610 [2024-11-26 07:38:08.592361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.592373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.592513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.592525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.592676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.592688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.592941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.592965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.593186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.593202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.593331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.593343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.593568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.593580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.593829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.593841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.593980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.593992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.594195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.594207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 
00:28:40.610 [2024-11-26 07:38:08.594443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.594455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.594631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.594644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.594863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.594875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.595009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.595021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.595170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.595182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.595321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.595333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.595537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.595550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.595750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.595761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.595854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.595867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.596032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.596045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 
00:28:40.610 [2024-11-26 07:38:08.596180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.596192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.596357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.596369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.596525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.596537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.596710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.596723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.596877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.596889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.596959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.596971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.597204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.597216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.597375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.597387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.597607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.597620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.597829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.597842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 
00:28:40.610 [2024-11-26 07:38:08.597984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.597997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.598092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.610 [2024-11-26 07:38:08.598114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.610 qpair failed and we were unable to recover it. 00:28:40.610 [2024-11-26 07:38:08.598350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.598367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.598560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.598577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.598806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.598822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.599054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.599072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.599322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.599338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.599473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.599490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.599584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.599599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.599799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.599816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 
00:28:40.611 [2024-11-26 07:38:08.600000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.600017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.600170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.600186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.600391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.600408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.600553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.600570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.600658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.600677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.600837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.600854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.601017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.601033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.601263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.601280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.601428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.601444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.601650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.601667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 
00:28:40.611 [2024-11-26 07:38:08.601758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.601773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.602005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.602022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.602224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.602240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.602346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.602362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.602577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.602593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.602847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.602863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.603094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.603111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.603336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.603352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.603504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.603520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.603726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.603743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 
00:28:40.611 [2024-11-26 07:38:08.603962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.603979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.604127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.604143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.604347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.604364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.604533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.604549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.604765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.604781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.604961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.604978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.605186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.605202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.605403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.605419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.605590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.605606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 00:28:40.611 [2024-11-26 07:38:08.605810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.611 [2024-11-26 07:38:08.605826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.611 qpair failed and we were unable to recover it. 
00:28:40.611 [2024-11-26 07:38:08.606100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.606117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.606286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.606323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.606586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.606605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.606753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.606766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.606990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.607003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.607235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.607247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.607407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.607420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.607559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.607571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.607714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.607726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 00:28:40.612 [2024-11-26 07:38:08.607892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.612 [2024-11-26 07:38:08.607905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.612 qpair failed and we were unable to recover it. 
(The same three-line error sequence repeats here without interruption: "connect() failed, errno = 111" from posix.c:1054:posix_sock_create, then "sock connection error" from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.", first for tqpair=0x7f76c4000b90 through [2024-11-26 07:38:08.614], then for tqpair=0x1f24ba0 through [2024-11-26 07:38:08.632], then for tqpair=0x7f76cc000b90 through [2024-11-26 07:38:08.648], always with addr=10.0.0.2, port=4420, at elapsed times 00:28:40.612 through 00:28:40.900.)
00:28:40.900 [2024-11-26 07:38:08.648627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.648644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.648854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.648889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.649099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.649135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.649293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.649327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.649529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.649563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.649746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.649780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.650037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.650056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.650159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.650195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.650434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.650467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.650734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.650768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 
00:28:40.900 [2024-11-26 07:38:08.651010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.651047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.651233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.651267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.651453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.651468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.651660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.651676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.651890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.651934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.652081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.652115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.652375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.652408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.652630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.652664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.652905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.652938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.653089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.653110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 
00:28:40.900 [2024-11-26 07:38:08.653337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.653371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.653610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.653643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.653845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.653879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.654147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.654182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.654357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.654373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.654529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.900 [2024-11-26 07:38:08.654545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.900 qpair failed and we were unable to recover it. 00:28:40.900 [2024-11-26 07:38:08.654708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.654725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.654972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.655008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.655193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.655227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.655420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.655455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 
00:28:40.901 [2024-11-26 07:38:08.655659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.655692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.655894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.655928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.656129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.656175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.656287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.656303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.656390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.656406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.656630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.656647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.656796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.656812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.656973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.657008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.657150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.657183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.657373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.657406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 
00:28:40.901 [2024-11-26 07:38:08.657693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.657725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.657996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.658030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.658214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.658229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.658322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.658356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.658551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.658585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.658772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.658804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.658991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.659017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.659095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.659110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.659267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.659283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.659431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.659463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 
00:28:40.901 [2024-11-26 07:38:08.659715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.659748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.659885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.659918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.660115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.660132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.660297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.660332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.660634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.660666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.660943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.660990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.661110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.661126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.661216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.661231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.661379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.661395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.661552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.661569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 
00:28:40.901 [2024-11-26 07:38:08.661708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.661724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.661880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.661896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.662086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.662103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.662317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.901 [2024-11-26 07:38:08.662351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.901 qpair failed and we were unable to recover it. 00:28:40.901 [2024-11-26 07:38:08.662591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.662624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.662876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.662909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.663203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.663237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.663418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.663452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.663726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.663767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.663911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.663928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 
00:28:40.902 [2024-11-26 07:38:08.664102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.664137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.664400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.664433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.664713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.664747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.664999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.665034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.665160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.665195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.665377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.665410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.665584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.665618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.665881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.665914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.666137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.666173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.666441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.666475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 
00:28:40.902 [2024-11-26 07:38:08.666766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.666799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.666932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.666997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.667288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.667322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.667498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.667533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.667783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.667816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.668071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.668107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.668300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.668319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.668455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.668498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.668691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.668723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.669000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.669035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 
00:28:40.902 [2024-11-26 07:38:08.669209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.669225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.669383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.669417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.669544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.669576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.669845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.669879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.670074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.670108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.902 qpair failed and we were unable to recover it. 00:28:40.902 [2024-11-26 07:38:08.670214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.902 [2024-11-26 07:38:08.670248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.670397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.670413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.670560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.670577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.670735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.670768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.670978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.671015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 
00:28:40.903 [2024-11-26 07:38:08.671204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.671240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.671420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.671436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.671539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.671554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.671792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.671825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.671970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.672005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.672296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.672337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.672448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.672464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.672537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.672552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.672759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.672775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.672954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.672971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 
00:28:40.903 [2024-11-26 07:38:08.673077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.673110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.673218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.673252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.673516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.673549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.673748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.673783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.674090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.674125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.674375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.674407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.674703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.674737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.674984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.675019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.675273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.675307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.675489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.675506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 
00:28:40.903 [2024-11-26 07:38:08.675792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.675825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.676097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.676131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.676255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.676271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.676447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.676463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.676737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.676781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.677019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.677053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.677200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.677239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.677503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.677537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.677783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.677817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.678024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.678041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 
00:28:40.903 [2024-11-26 07:38:08.678144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.678179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.678384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.678418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.678629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.903 [2024-11-26 07:38:08.678662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.903 qpair failed and we were unable to recover it. 00:28:40.903 [2024-11-26 07:38:08.678852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.678885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.679115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.679131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.679386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.679420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.679715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.679747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.679884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.679918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.680170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.680205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.680385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.680401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 
00:28:40.904 [2024-11-26 07:38:08.680662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.680678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.680888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.680922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.681184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.681218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.681331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.681373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.681583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.681599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.681829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.681846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.682021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.682039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.682186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.682221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.682423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.682457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.682716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.682750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 
00:28:40.904 [2024-11-26 07:38:08.683022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.683065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.683223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.683239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.683382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.683399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.683505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.683521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.683737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.683753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.683848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.683864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.684084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.684118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.684323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.684358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.684608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.684640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.684930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.684977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 
00:28:40.904 [2024-11-26 07:38:08.685129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.685145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.685237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.685252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.685342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.685357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.685588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.685623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.685841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.685875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.686082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.686114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.686194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.686213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.686365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.686382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.686585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.686619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.686865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.686899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 
00:28:40.904 [2024-11-26 07:38:08.687079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.904 [2024-11-26 07:38:08.687096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.904 qpair failed and we were unable to recover it. 00:28:40.904 [2024-11-26 07:38:08.687282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.687315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.687524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.687557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.687767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.687801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.688043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.688078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.688215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.688249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.688463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.688479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.688665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.688699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.688909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.688942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.689157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.689174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 
00:28:40.905 [2024-11-26 07:38:08.689253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.689268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.689441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.689458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.689541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.689556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.689697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.689747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.689983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.690019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.690261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.690294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.690674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.690708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.691019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.691054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.691243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.691277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.691425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.691459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 
00:28:40.905 [2024-11-26 07:38:08.691727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.691761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.691959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.691994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.692193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.692227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.692381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.692416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.692705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.692739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.693007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.693041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.693305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.693341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.693526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.693560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.693741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.693775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.694041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.694077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 
00:28:40.905 [2024-11-26 07:38:08.694264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.694298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.694500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.694516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.694718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.694752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.694888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.694932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.695038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.695055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.695212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.695257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.695445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.695485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.695751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.695785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.696032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.905 [2024-11-26 07:38:08.696068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.905 qpair failed and we were unable to recover it. 00:28:40.905 [2024-11-26 07:38:08.696249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.696283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 
00:28:40.906 [2024-11-26 07:38:08.696481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.696498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.696679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.696695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.696921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.696962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.697167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.697201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.697492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.697527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.697723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.697755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.697944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.697991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.698202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.698235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.698524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.698558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.698836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.698869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 
00:28:40.906 [2024-11-26 07:38:08.699024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.699058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.699234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.699250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.699407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.699423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.699610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.699644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.699782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.699834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.700127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.700162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.700402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.700437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.700572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.700606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.700780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.700814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.701071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.701106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 
00:28:40.906 [2024-11-26 07:38:08.701384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.701418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.701709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.701743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.702010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.702046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.702248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.702282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.702430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.702465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.702725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.702760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.702970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.703007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.703250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.703267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.703380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.703414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.703602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.703635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 
00:28:40.906 [2024-11-26 07:38:08.703826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.703860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.704157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.906 [2024-11-26 07:38:08.704174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.906 qpair failed and we were unable to recover it. 00:28:40.906 [2024-11-26 07:38:08.704341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.704374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.704644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.704678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.704876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.704909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.705118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.705153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.705278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.705300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.705511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.705543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.705747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.705781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.705918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.705962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 
00:28:40.907 [2024-11-26 07:38:08.706158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.706174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.706407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.706441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.706810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.706843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.707037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.707073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.707276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.707309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.707508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.707525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.707679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.707713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.707984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.708019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.708204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.708245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.708360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.708376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 
00:28:40.907 [2024-11-26 07:38:08.708565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.708582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.708728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.708744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.708972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.709008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.709204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.709238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.709378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.709394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.709484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.709500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.709733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.709767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.710032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.710067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.710215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.710248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.710437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.710471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 
00:28:40.907 [2024-11-26 07:38:08.710609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.710642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.710845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.710879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.711075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.711111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.711363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.711380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.711545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.711562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.711741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.711758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.711898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.711915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.712123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.712140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.712301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.712317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 00:28:40.907 [2024-11-26 07:38:08.712413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.907 [2024-11-26 07:38:08.712450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.907 qpair failed and we were unable to recover it. 
00:28:40.907 [2024-11-26 07:38:08.712696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.712730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.712935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.712998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.713130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.713146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.713249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.713265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.713470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.713486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.713587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.713604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.713692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.713711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.713862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.713878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.714065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.714082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.714175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.714204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 
00:28:40.908 [2024-11-26 07:38:08.714404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.714438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.714719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.714752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.715019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.715055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.715301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.715335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.715432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.715447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.715559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.715575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.715783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.715799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.715960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.715995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.716242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.716276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 00:28:40.908 [2024-11-26 07:38:08.716474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.908 [2024-11-26 07:38:08.716508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.908 qpair failed and we were unable to recover it. 
00:28:40.908 [2024-11-26 07:38:08.716654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.908 [2024-11-26 07:38:08.716688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420
00:28:40.908 qpair failed and we were unable to recover it.
00:28:40.908 [2024-11-26 07:38:08.716889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.908 [2024-11-26 07:38:08.716923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420
00:28:40.908 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for the remaining connection attempts against tqpair=0x7f76cc000b90, tqpair=0x1f24ba0 and tqpair=0x7f76c4000b90 between 07:38:08.717 and 07:38:08.757 ...]
00:28:40.914 [2024-11-26 07:38:08.757758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.914 [2024-11-26 07:38:08.757771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:40.914 qpair failed and we were unable to recover it.
00:28:40.914 [2024-11-26 07:38:08.758009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.758023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.758156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.758168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.758385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.758397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.758574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.758587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.758811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.758824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.758992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.759005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.759194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.759207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.759352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.759365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.759513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.759525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.759731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.759744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 
00:28:40.914 [2024-11-26 07:38:08.759882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.759895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.760064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.760077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.760235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.760247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.760397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.760410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.760608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.760621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.760773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.760786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.760882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.760895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.761027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.761055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.761283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.761301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.761449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.761465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 
00:28:40.914 [2024-11-26 07:38:08.761649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.761666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.761777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.761793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.761868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.761885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.762027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.762045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.762277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.762294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.762433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.762449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.762526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.762542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.762780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.762797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.762890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.762906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.914 [2024-11-26 07:38:08.763002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.763017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 
00:28:40.914 [2024-11-26 07:38:08.763104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.914 [2024-11-26 07:38:08.763115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.914 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.763199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.763212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.763368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.763380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.763466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.763479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.763625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.763638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.763881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.763893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.764061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.764075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.764271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.764283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.764423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.764436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.764525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.764538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 
00:28:40.915 [2024-11-26 07:38:08.764678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.764691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.764776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.764789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.765012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.765025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.765158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.765170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.765385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.765397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.765570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.765584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.765666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.765678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.765880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.765892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.765958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.765971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.766139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.766151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 
00:28:40.915 [2024-11-26 07:38:08.766217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.766230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.766452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.766465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.766695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.766707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.766842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.766856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.766990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.767003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.767138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.767150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.767376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.767389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.767518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.767532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.767676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.767689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.767914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.767925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 
00:28:40.915 [2024-11-26 07:38:08.768016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.768030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.768196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.768209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.768380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.768394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.768663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.768675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.768903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.915 [2024-11-26 07:38:08.768916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.915 qpair failed and we were unable to recover it. 00:28:40.915 [2024-11-26 07:38:08.769071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.769085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.769231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.769244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.769324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.769337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.769530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.769542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.769617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.769645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 
00:28:40.916 [2024-11-26 07:38:08.769856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.769869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.770023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.770036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.770238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.770251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.770449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.770462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.770552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.770564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.770691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.770704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.770795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.770808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.770986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.770999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.771079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.771091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.771322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.771335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 
00:28:40.916 [2024-11-26 07:38:08.771467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.771480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.771727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.771740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.771882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.771894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.772056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.772069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.772272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.772285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.772367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.772379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.772528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.772542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.772681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.772693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.772841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.772855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.772935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.772952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 
00:28:40.916 [2024-11-26 07:38:08.773107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.773121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.773279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.773292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.773357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.773369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.773504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.773516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.773592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.773605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.773752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.773764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.773961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.773974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.774140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.774155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.774304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.774317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.774460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.774473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 
00:28:40.916 [2024-11-26 07:38:08.774624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.774637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.916 [2024-11-26 07:38:08.774884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.916 [2024-11-26 07:38:08.774897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.916 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.775120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.775132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.775219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.775232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.775429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.775441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.775665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.775678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.775826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.775838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.776017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.776030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.776187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.776201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.776344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.776358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 
00:28:40.917 [2024-11-26 07:38:08.776424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.776436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.776658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.776671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.776880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.776892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.777042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.777055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.777188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.777201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.777285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.777297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.777446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.777459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.777548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.777560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.777759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.777771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.778009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.778023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 
00:28:40.917 [2024-11-26 07:38:08.778182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.778195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.778291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.778304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.778436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.778449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.778578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.778590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.778789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.778802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.778883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.778896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.779043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.779056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.779253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.779266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.779484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.779497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.779718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.779730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 
00:28:40.917 [2024-11-26 07:38:08.779807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.779819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.779958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.779972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.780116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.780128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.780191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.780204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.780405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.780417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.780512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.780525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.780798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.780812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.780964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.780979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.781151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.917 [2024-11-26 07:38:08.781164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.917 qpair failed and we were unable to recover it. 00:28:40.917 [2024-11-26 07:38:08.781315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.781328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 
00:28:40.918 [2024-11-26 07:38:08.781464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.781477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.781613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.781626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.781823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.781836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.781972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.781985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.782119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.782132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.782291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.782305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.782446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.782459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.782592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.782605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.782683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.782697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.782917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.782930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 
00:28:40.918 [2024-11-26 07:38:08.783028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.783041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.783187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.783199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.783392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.783405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.783637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.783650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.783715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.783729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.783873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.783886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.784103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.784117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.784266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.784279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.784431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.784444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.784640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.784654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 
00:28:40.918 [2024-11-26 07:38:08.784792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.784804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.784959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.784972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.785111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.785124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.785289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.785303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.785400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.785412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.785571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.785584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.785806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.785820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.785915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.785928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.786087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.786101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.786178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.786191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 
00:28:40.918 [2024-11-26 07:38:08.786351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.786364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.786441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.786454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.786620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.786632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.786703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.786715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.786844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.786857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.787012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.787026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.787211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.918 [2024-11-26 07:38:08.787224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.918 qpair failed and we were unable to recover it. 00:28:40.918 [2024-11-26 07:38:08.787465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.787480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.787625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.787637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.787789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.787801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 
00:28:40.919 [2024-11-26 07:38:08.787960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.787973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.788173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.788186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.788410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.788423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.788586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.788598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.788743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.788756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.788923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.788935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.789072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.789085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.789189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.789201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.789332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.789345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.789547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.789559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 
00:28:40.919 [2024-11-26 07:38:08.789769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.789782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.789996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.790008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.790257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.790270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.790408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.790420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.790668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.790682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.790759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.790771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.790927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.790940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.791091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.791104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.791331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.791344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.791513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.791526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 
00:28:40.919 [2024-11-26 07:38:08.791618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.791630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.791827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.791840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.791983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.791997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.792216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.792228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.792390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.792402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.792588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.792600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.792765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.792778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.792854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.792866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.792958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.792985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.793141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.793155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 
00:28:40.919 [2024-11-26 07:38:08.793284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.793297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.793442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.793455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.793675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.793687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.793827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.793839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.919 [2024-11-26 07:38:08.793910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.919 [2024-11-26 07:38:08.793923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.919 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.794180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.794193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.794359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.794372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.794586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.794600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.794758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.794771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.794913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.794925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 
00:28:40.920 [2024-11-26 07:38:08.795015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.795029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.795105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.795117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.795270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.795282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.795415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.795428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.795623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.795636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.795732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.795745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.795827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.795840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.795986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.796000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.796088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.796100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.796190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.796203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 
00:28:40.920 [2024-11-26 07:38:08.796294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.796306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.796383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.796396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.796522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.796536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.796767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.796781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.796989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.797002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.797155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.797168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.797293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.797306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.797467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.797480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.797609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.797622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.797753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.797766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 
00:28:40.920 [2024-11-26 07:38:08.797915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.797927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.798133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.798146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.798341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.798354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.798523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.798536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.798687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.798699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.798870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.798883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.799132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.799144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.799340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.799353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.920 [2024-11-26 07:38:08.799487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.920 [2024-11-26 07:38:08.799499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.920 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.799600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.799613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 
00:28:40.921 [2024-11-26 07:38:08.799691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.799703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.799933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.799946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.800099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.800112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.800319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.800332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.800558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.800571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.800791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.800802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.801004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.801019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.801160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.801175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.801324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.801336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.801480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.801492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 
00:28:40.921 [2024-11-26 07:38:08.801656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.801669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.801838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.801850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.802009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.802022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.802191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.802204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.802366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.802379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.802520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.802533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.802609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.802621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.802824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.802837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.802967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.802981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.803064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.803077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 
00:28:40.921 [2024-11-26 07:38:08.803293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.803307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.803478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.803490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.803621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.803634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.803878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.803892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.804101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.804114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.804204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.804217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.804368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.804381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.804608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.804621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.804694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.804706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.804921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.804933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 
00:28:40.921 [2024-11-26 07:38:08.805074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.805087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.805311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.805324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.805481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.805494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.805586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.805598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.805666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.805679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.805896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.805910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.921 [2024-11-26 07:38:08.806105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.921 [2024-11-26 07:38:08.806119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.921 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.806345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.806358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.806492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.806504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.806705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.806718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 
00:28:40.922 [2024-11-26 07:38:08.806880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.806893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.807110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.807124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.807339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.807352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.807546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.807559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.807715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.807727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.807872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.807886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.807960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.807973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.808135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.808151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.808223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.808237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.808389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.808402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 
00:28:40.922 [2024-11-26 07:38:08.808528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.808542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.808759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.808772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.808848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.808862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.808941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.808957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.809044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.809057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.809206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.809218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.809358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.809371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.809623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.809636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.809807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.809820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 00:28:40.922 [2024-11-26 07:38:08.810085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.922 [2024-11-26 07:38:08.810100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.922 qpair failed and we were unable to recover it. 
00:28:40.923 [2024-11-26 07:38:08.818775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.923 [2024-11-26 07:38:08.818787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.923 qpair failed and we were unable to recover it. 00:28:40.923 [2024-11-26 07:38:08.818998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.923 [2024-11-26 07:38:08.819012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.923 qpair failed and we were unable to recover it. 00:28:40.923 [2024-11-26 07:38:08.819183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.923 [2024-11-26 07:38:08.819195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.924 qpair failed and we were unable to recover it. 00:28:40.924 [2024-11-26 07:38:08.819341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.924 [2024-11-26 07:38:08.819354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.924 qpair failed and we were unable to recover it. 00:28:40.924 [2024-11-26 07:38:08.819547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.924 [2024-11-26 07:38:08.819583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:40.924 qpair failed and we were unable to recover it. 00:28:40.924 [2024-11-26 07:38:08.819734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.924 [2024-11-26 07:38:08.819758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.924 qpair failed and we were unable to recover it. 00:28:40.924 [2024-11-26 07:38:08.819959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.924 [2024-11-26 07:38:08.819995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:40.924 qpair failed and we were unable to recover it. 00:28:40.924 [2024-11-26 07:38:08.820204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.924 [2024-11-26 07:38:08.820218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.924 qpair failed and we were unable to recover it. 00:28:40.924 [2024-11-26 07:38:08.820316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.924 [2024-11-26 07:38:08.820329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.924 qpair failed and we were unable to recover it. 00:28:40.924 [2024-11-26 07:38:08.820391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.924 [2024-11-26 07:38:08.820402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.924 qpair failed and we were unable to recover it. 
00:28:40.927 [2024-11-26 07:38:08.845641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.845672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.845766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.845778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.846004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.846039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.846320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.846354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.846574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.846607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.846848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.846881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.847140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.847173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.847417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.847450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.847661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.927 [2024-11-26 07:38:08.847693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.927 qpair failed and we were unable to recover it. 00:28:40.927 [2024-11-26 07:38:08.847967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.848000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 
00:28:40.928 [2024-11-26 07:38:08.848287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.848320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.848588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.848622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.848820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.848852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.849038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.849072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.849315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.849348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.849536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.849569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.849762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.849774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.849984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.850019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.850262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.850296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.850582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.850593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 
00:28:40.928 [2024-11-26 07:38:08.850740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.850751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.850993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.851027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.851294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.851327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.851616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.851660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.851901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.851914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.852122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.852134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.852211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.852223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.852373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.852385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.852537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.852549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.852764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.852795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 
00:28:40.928 [2024-11-26 07:38:08.853039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.853072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.853333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.853366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.853488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.853519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.853784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.853817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.853972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.854007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.854284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.854320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.854471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.854482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.854674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.854686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.854902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.854914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.854998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.855010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 
00:28:40.928 [2024-11-26 07:38:08.855246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.855278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.855450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.855481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.855727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.855766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.855840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.855851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.856019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.856032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.856107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.856118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.928 [2024-11-26 07:38:08.856314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.928 [2024-11-26 07:38:08.856326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.928 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.856469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.856480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.856554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.856566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.856766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.856777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 
00:28:40.929 [2024-11-26 07:38:08.856976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.857010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.857224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.857256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.857520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.857552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.857852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.857884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.858149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.858184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.858478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.858512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.858755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.858786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.858993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.859028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.859300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.859334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.859602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.859639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 
00:28:40.929 [2024-11-26 07:38:08.859803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.859815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.859970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.860005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.860198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.860230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.860520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.860553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.860837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.860851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.861018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.861052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.861267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.861299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.861485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.861497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.861728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.861760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.861946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.861998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 
00:28:40.929 [2024-11-26 07:38:08.862235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.862284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.862421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.862453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.862750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.862783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.863070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.863103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.863318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.863351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.863592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.863604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.863735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.863747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.863960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.863994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.864201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.864235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 00:28:40.929 [2024-11-26 07:38:08.864437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.929 [2024-11-26 07:38:08.864470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.929 qpair failed and we were unable to recover it. 
00:28:40.930 [2024-11-26 07:38:08.864643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.864676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.864863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.864895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.865171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.865204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.865328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.865360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.865607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.865620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.865767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.865778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.865988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.866000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.866176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.866208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.866391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.866424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.866558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.866590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 
00:28:40.930 [2024-11-26 07:38:08.866782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.866814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.867013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.867047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.867322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.867355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.867633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.867645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.867794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.867805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.868036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.868069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.868282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.868314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.868445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.868456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.868602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.868614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.868836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.868848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 
00:28:40.930 [2024-11-26 07:38:08.869092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.869127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.869331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.869364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.869495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.869526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.869654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.869687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.869859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.869873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.870027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.870039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.870181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.870194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.870320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.870353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.870629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.870661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.870831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.870843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 
00:28:40.930 [2024-11-26 07:38:08.870970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.870983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.871122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.871133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.871295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.871327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.871523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.871555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.871779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.871811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.872056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.872089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.872338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.872370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.930 [2024-11-26 07:38:08.872633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.930 [2024-11-26 07:38:08.872665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.930 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.872966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.873000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.873137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.873170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 
00:28:40.931 [2024-11-26 07:38:08.873344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.873376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.873556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.873589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.873754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.873765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.873920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.873960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.874217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.874250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.874441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.874472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.874662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.874693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.874884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.874895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.875126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.875139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.875300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.875332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 
00:28:40.931 [2024-11-26 07:38:08.875521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.875552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.875828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.875861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.875975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.876008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.876223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.876255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.876497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.876528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.876724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.876755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.877013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.877025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.877238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.877271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.877482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.877515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.877780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.877812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 
00:28:40.931 [2024-11-26 07:38:08.878103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.878137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.878406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.878439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.878630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.878662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.878806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.878838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.879108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.879148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.879352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.879384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.879508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.879540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.879711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.879722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.879916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.879928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.880071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.880083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 
00:28:40.931 [2024-11-26 07:38:08.880303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.880315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.880546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.880579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.880819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.880851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.881069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.881102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.881372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.881405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.931 [2024-11-26 07:38:08.881606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.931 [2024-11-26 07:38:08.881639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.931 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.881796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.881807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.881893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.881918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.882179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.882213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.882326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.882358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 
00:28:40.932 [2024-11-26 07:38:08.882626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.882658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.882853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.882885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.883082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.883117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.883384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.883419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.883700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.883733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.884008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.884043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.884238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.884272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.884465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.884497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.884765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.884798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.885056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.885090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 
00:28:40.932 [2024-11-26 07:38:08.885284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.885316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.885519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.885552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.885814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.885827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.885973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.885985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.886200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.886234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.886442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.886474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.886693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.886726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.887002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.887036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.887307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.887340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.887544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.887577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 
00:28:40.932 [2024-11-26 07:38:08.887869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.887902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.888122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.888156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.888417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.888450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.888600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.888612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.888829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.888867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.889071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.889106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.889367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.889400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.889639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.889651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.889778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.889789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.889938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.889978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 
00:28:40.932 [2024-11-26 07:38:08.890272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.890305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.890579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.890591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.890784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.932 [2024-11-26 07:38:08.890795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.932 qpair failed and we were unable to recover it. 00:28:40.932 [2024-11-26 07:38:08.891030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.891042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.891124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.891136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.891362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.891394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.891605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.891636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.891880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.891911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.892169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.892204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.892380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.892411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 
00:28:40.933 [2024-11-26 07:38:08.892645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.892656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.892800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.892812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.893037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.893050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.893207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.893220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.893418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.893450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.893729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.893761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.894007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.894041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.894234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.894266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.894536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.894568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.894808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.894840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 
00:28:40.933 [2024-11-26 07:38:08.895051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.895085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.895203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.895234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.895481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.895512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.895783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.895816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.896058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.896093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.896363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.896397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.896664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.896697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.896906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.896917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.897070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.897081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.897292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.897324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 
00:28:40.933 [2024-11-26 07:38:08.897569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.897601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.897890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.897922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.898197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.898230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.898410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.898444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.898665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.898703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.898998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.899032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.899277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.899310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.899565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.899597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.899841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.899874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 00:28:40.933 [2024-11-26 07:38:08.900128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.933 [2024-11-26 07:38:08.900163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.933 qpair failed and we were unable to recover it. 
00:28:40.933 [2024-11-26 07:38:08.900411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.900444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.900617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.900629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.900776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.900788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.900877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.900888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.901041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.901088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.901307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.901340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.901457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.901488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.901667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.901699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.901956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.901991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.902262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.902312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 
00:28:40.934 [2024-11-26 07:38:08.902575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.902606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.902886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.902918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.903200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.903234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.903432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.903464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.903640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.903672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.903934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.903950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.904099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.904122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.904260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.904272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.904470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.904483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.904614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.904636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 
00:28:40.934 [2024-11-26 07:38:08.904884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.904916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.905179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.905213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.905514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.905546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.905807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.905840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.906128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.906163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.906388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.906420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.906672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.906683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.906754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.906766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.906859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.906871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.907031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.907065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 
00:28:40.934 [2024-11-26 07:38:08.907333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.907366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.907533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.907545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.934 [2024-11-26 07:38:08.907697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.934 [2024-11-26 07:38:08.907729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.934 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.907932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.907972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.908242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.908281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.908557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.908589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.908838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.908871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.909058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.909092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.909382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.909415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.909706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.909739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 
00:28:40.935 [2024-11-26 07:38:08.909957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.909969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.910136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.910168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.910432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.910466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.910670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.910702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.910960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.910995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.911262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.911295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.911574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.911606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.911793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.911825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.912012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.912059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.912248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.912281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 
00:28:40.935 [2024-11-26 07:38:08.912542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.912554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.912713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.912724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.912908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.912941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.913157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.913190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.913328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.913361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.913617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.913649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.913819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.913831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.913978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.913990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.914243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.914276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.914463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.914496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 
00:28:40.935 [2024-11-26 07:38:08.914761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.914773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.914927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.914969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.915238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.915271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.915466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.915498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.915691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.915723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.915852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.915884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.916150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.916185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.916430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.916463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.916722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.916755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.935 [2024-11-26 07:38:08.916996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.917031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 
00:28:40.935 [2024-11-26 07:38:08.917219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.935 [2024-11-26 07:38:08.917252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.935 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.917522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.917555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.917707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.917719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.917945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.918001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.918247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.918286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.918572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.918618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.918863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.918875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.919022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.919035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.919259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.919271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 00:28:40.936 [2024-11-26 07:38:08.919461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.936 [2024-11-26 07:38:08.919501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:40.936 qpair failed and we were unable to recover it. 
00:28:40.936 [2024-11-26 07:38:08.919748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.936 [2024-11-26 07:38:08.919765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.936 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry from 07:38:08.919919 through 07:38:08.928187 ...]
00:28:40.937 [2024-11-26 07:38:08.928482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.937 [2024-11-26 07:38:08.928518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:40.937 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 from 07:38:08.928801 through 07:38:08.937673; every connect() fails with errno = 111 and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:40.938 [2024-11-26 07:38:08.937891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:40.938 [2024-11-26 07:38:08.937930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:40.938 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 from 07:38:08.938188 through 07:38:08.969131; every connect() fails with errno = 111 and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:41.220 [2024-11-26 07:38:08.969323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.220 [2024-11-26 07:38:08.969340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.220 qpair failed and we were unable to recover it. 00:28:41.220 [2024-11-26 07:38:08.969558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.220 [2024-11-26 07:38:08.969574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.220 qpair failed and we were unable to recover it. 00:28:41.220 [2024-11-26 07:38:08.969727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.220 [2024-11-26 07:38:08.969743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.220 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.969967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.969984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.970194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.970211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.970311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.970328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.970538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.970554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.970641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.970657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.970864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.970880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.971037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.971055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 
00:28:41.221 [2024-11-26 07:38:08.971289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.971305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.971455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.971471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.971590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.971606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.971828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.971844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.972009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.972029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.972237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.972253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.972354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.972378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.972523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.972539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.972830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.972847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.973072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.973089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 
00:28:41.221 [2024-11-26 07:38:08.973299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.973315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.973569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.973603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.973851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.973884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.974077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.974112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.974248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.974281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.974470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.974503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.974637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.974671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.974916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.974960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.975240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.975273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.975527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.975560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 
00:28:41.221 [2024-11-26 07:38:08.975814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.975846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.976123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.976158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.976295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.976328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.976479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.976512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.976693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.976727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.976996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.977013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.977178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.977194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.977417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.977450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.977630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.977663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 00:28:41.221 [2024-11-26 07:38:08.977853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.221 [2024-11-26 07:38:08.977886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.221 qpair failed and we were unable to recover it. 
00:28:41.221 [2024-11-26 07:38:08.978187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.978221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.978414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.978452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.978669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.978702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.978968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.979003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.979223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.979256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.979504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.979538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.979730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.979764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.979955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.979971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.980159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.980192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.980467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.980499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 
00:28:41.222 [2024-11-26 07:38:08.980785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.980818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.981006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.981040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.981308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.981341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.981579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.981613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.981787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.981804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.981990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.982024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.982275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.982307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.982509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.982542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.982757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.982789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.983063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.983096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 
00:28:41.222 [2024-11-26 07:38:08.983280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.983296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.983449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.983485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.983713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.983747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.984019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.984054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.984311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.984328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.984430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.984446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.984628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.984662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.984971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.985005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.985263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.985296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.985436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.985469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 
00:28:41.222 [2024-11-26 07:38:08.985672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.985706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.985923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.985987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.986234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.986268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.986541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.986576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.986782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.986816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.986932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.986956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.987216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.987234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.987404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.222 [2024-11-26 07:38:08.987423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.222 qpair failed and we were unable to recover it. 00:28:41.222 [2024-11-26 07:38:08.987658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.987691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.987879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.987896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 
00:28:41.223 [2024-11-26 07:38:08.988095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.988130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.988398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.988431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.988716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.988752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.989060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.989096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.989278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.989313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.989452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.989485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.989794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.989827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.990045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.990062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.990245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.990278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.990477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.990511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 
00:28:41.223 [2024-11-26 07:38:08.990706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.990739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.991007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.991024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.991242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.991276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.991484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.991517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.991785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.991819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.992032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.992049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.992206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.992240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.992381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.992415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.992613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.992647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.992829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.992863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 
00:28:41.223 [2024-11-26 07:38:08.993142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.993177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.993453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.993486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.993710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.993743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.993921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.993982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.994178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.994211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.994437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.994470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.994768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.994801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.995085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.995120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.995385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.995418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.995612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.995653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 
00:28:41.223 [2024-11-26 07:38:08.995911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.995944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.996163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.996181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.996357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.996390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.996609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.996641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.996911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.996964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.997175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.223 [2024-11-26 07:38:08.997192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.223 qpair failed and we were unable to recover it. 00:28:41.223 [2024-11-26 07:38:08.997431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.997463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.997692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.997726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.997917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.997964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.998124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.998142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 
00:28:41.224 [2024-11-26 07:38:08.998373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.998406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.998633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.998667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.998851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.998884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.999086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.999121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.999239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.999272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.999525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.999559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.999742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.999777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:08.999905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:08.999923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:09.000173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:09.000190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 00:28:41.224 [2024-11-26 07:38:09.000360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.224 [2024-11-26 07:38:09.000396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.224 qpair failed and we were unable to recover it. 
00:28:41.224 [2024-11-26 07:38:09.000592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.224 [2024-11-26 07:38:09.000625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:41.224 qpair failed and we were unable to recover it.
(the same connect() failed / sock connection error / qpair failed triplet repeats for tqpair=0x1f24ba0 from [2024-11-26 07:38:09.000840] through [2024-11-26 07:38:09.017445], all against addr=10.0.0.2, port=4420)
00:28:41.226 [2024-11-26 07:38:09.017696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.226 [2024-11-26 07:38:09.017776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420
00:28:41.226 qpair failed and we were unable to recover it.
00:28:41.226 [2024-11-26 07:38:09.018019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.226 [2024-11-26 07:38:09.018056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.226 qpair failed and we were unable to recover it.
(the same triplet repeats for tqpair=0x7f76c4000b90 from [2024-11-26 07:38:09.018257] through [2024-11-26 07:38:09.050652], all against addr=10.0.0.2, port=4420)
00:28:41.230 [2024-11-26 07:38:09.050932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.050993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.051203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.051215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.051387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.051421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.051606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.051639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.051830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.051864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.052135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.052170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.052359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.052392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.052585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.052618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.052915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.052960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.053082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.053094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 
00:28:41.230 [2024-11-26 07:38:09.053301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.053335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.053615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.053649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.053865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.053897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.054106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.054140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.054359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.054393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.054692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.054724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.054919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.054973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.055225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.055259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.055449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.055461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.055703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.055736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 
00:28:41.230 [2024-11-26 07:38:09.055928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.055971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.056166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.056199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.056423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.056435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.056657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.056669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.056911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.056944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.230 qpair failed and we were unable to recover it. 00:28:41.230 [2024-11-26 07:38:09.057091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.230 [2024-11-26 07:38:09.057124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.057325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.057357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.057483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.057515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.057696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.057729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.058001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.058014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 
00:28:41.231 [2024-11-26 07:38:09.058244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.058277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.058474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.058507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.058754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.058787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.058966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.058979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.059117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.059152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.059423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.059455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.059655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.059688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.059940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.059983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.060244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.060277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.060522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.060535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 
00:28:41.231 [2024-11-26 07:38:09.060672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.060685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.060824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.060857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.061037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.061072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.061342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.061374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.061599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.061632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.061879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.061911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.062120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.062154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.062420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.062433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.062670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.062704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.062882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.062915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 
00:28:41.231 [2024-11-26 07:38:09.063188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.063222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.063408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.063421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.063569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.063581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.063848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.063880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.064064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.064099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.064378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.064390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.064535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.064568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.064850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.064883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.065083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.231 [2024-11-26 07:38:09.065096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.231 qpair failed and we were unable to recover it. 00:28:41.231 [2024-11-26 07:38:09.065323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.065357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 
00:28:41.232 [2024-11-26 07:38:09.065536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.065570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.065847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.065880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.066089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.066102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.066316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.066349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.066489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.066527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.066799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.066831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.067014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.067027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.067136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.067150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.067384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.067396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.067551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.067564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 
00:28:41.232 [2024-11-26 07:38:09.067661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.067697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.067992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.068028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.068295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.068329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.068452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.068485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.068755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.068788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.068924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.068936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.069087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.069100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.069247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.069284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.069548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.069581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.069872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.069906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 
00:28:41.232 [2024-11-26 07:38:09.070199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.070234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.070454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.070487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.070734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.070768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.071024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.071037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.071267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.071279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.071436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.071470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.071766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.071800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.072054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.072068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.072215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.072227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.072404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.072437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 
00:28:41.232 [2024-11-26 07:38:09.072626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.072660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.072977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.073011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.073243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.073277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.073474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.073508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.073707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.073739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.073989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.074024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.232 [2024-11-26 07:38:09.074240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.232 [2024-11-26 07:38:09.074273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.232 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.074453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.074465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.074695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.074728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.075000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.075035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 
00:28:41.233 [2024-11-26 07:38:09.075249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.075283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.075545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.075558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.075660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.075693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.075836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.075868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.076119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.076170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.076418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.076431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.076531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.076543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.076791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.076803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.076986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.077020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.077225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.077257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 
00:28:41.233 [2024-11-26 07:38:09.077527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.077560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.077829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.077862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.077984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.078013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.078259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.078272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.078405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.078418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.078503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.078515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.078668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.078681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.078888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.078900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.079034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.079047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.079184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.079196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 
00:28:41.233 [2024-11-26 07:38:09.079347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.079360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.079581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.079614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.079801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.079834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.080085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.080119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.080297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.080331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.080524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.080557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.080834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.080873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.080967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.080980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.081210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.081244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.081380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.081412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 
00:28:41.233 [2024-11-26 07:38:09.081727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.081761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.081968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.082003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.233 [2024-11-26 07:38:09.082183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.233 [2024-11-26 07:38:09.082216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.233 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.082478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.082490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.082727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.082762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.082938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.082958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.083127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.083160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.083407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.083441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.083627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.083660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.083911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.083944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 
00:28:41.234 [2024-11-26 07:38:09.084232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.084265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.084519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.084531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.084681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.084693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.084903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.084935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.085195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.085236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.085487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.085520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.085835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.085868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.086161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.086197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.086474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.086508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.086792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.086825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 
00:28:41.234 [2024-11-26 07:38:09.087036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.087071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.087271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.087304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.087566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.087600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.087917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.087960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.088234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.088268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.088544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.088577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.088863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.088896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.089116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.089151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.089343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.089377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.089652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.089685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 
00:28:41.234 [2024-11-26 07:38:09.089945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.089992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.090250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.090283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.090554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.090586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.234 qpair failed and we were unable to recover it. 00:28:41.234 [2024-11-26 07:38:09.090771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.234 [2024-11-26 07:38:09.090805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.091068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.091081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.091230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.091243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.091406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.091439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.091569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.091603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.091803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.091836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.092161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.092197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 
00:28:41.235 [2024-11-26 07:38:09.092505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.092538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.092797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.092830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.093082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.093116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.093387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.093421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.093673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.093706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.094000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.094035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.094245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.094257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.094493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.094526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.094720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.094753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.095027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.095041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 
00:28:41.235 [2024-11-26 07:38:09.095265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.095277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.095539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.095572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.095791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.095825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.096089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.096123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.096249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.096275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.096514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.096528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.096701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.096714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.096875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.096888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.097120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.097134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.097300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.097313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 
00:28:41.235 [2024-11-26 07:38:09.097463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.097476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.097555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.097569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.097720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.097733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.097916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.097929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.098033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.098046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.098182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.098195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.098291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.098324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.098518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.098551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.098809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.098843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.099096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.099131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 
00:28:41.235 [2024-11-26 07:38:09.099387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.235 [2024-11-26 07:38:09.099421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.235 qpair failed and we were unable to recover it. 00:28:41.235 [2024-11-26 07:38:09.099688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.099722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.099925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.099967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.100243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.100276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.100458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.100491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.100765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.100798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.101071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.101105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.101393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.101427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.101652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.101665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.101873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.101886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 
00:28:41.236 [2024-11-26 07:38:09.102135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.102170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.102295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.102329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.102551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.102583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.102878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.102890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.103065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.103078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.103306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.103318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.103400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.103412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.103557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.103590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.103795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.103828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.104039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.104074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 
00:28:41.236 [2024-11-26 07:38:09.104277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.104310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.104565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.104598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.104902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.104937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.105191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.105204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.105437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.105451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.105612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.105624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.105839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.105852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.106120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.106155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.106355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.106388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.106572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.106604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 
00:28:41.236 [2024-11-26 07:38:09.106807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.106840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.107089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.107103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.107318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.107353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.107555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.107588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.236 qpair failed and we were unable to recover it. 00:28:41.236 [2024-11-26 07:38:09.107859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.236 [2024-11-26 07:38:09.107892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.108183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.108197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.108456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.108489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.108744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.108779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.108985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.109020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.109213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.109225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 
00:28:41.237 [2024-11-26 07:38:09.109328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.109340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.109546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.109559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.109707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.109719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.109927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.109940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.110179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.110193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.110427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.110440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.110595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.110607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.110805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.110818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.110978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.110992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.111234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.111266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 
00:28:41.237 [2024-11-26 07:38:09.111575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.111608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.111813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.111847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.112119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.112133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.112260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.112291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.112485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.112518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.112783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.112816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.113114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.113149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.113288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.113320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.113516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.113529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.113742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.113755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 
00:28:41.237 [2024-11-26 07:38:09.113940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.113958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.114159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.114191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.114484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.114518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.114723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.114756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.114892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.114932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.115139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.115174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.115438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.115471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.115725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.115759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.116074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.116087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.116306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.116319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 
00:28:41.237 [2024-11-26 07:38:09.116405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.116420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.116661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.116694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.237 qpair failed and we were unable to recover it. 00:28:41.237 [2024-11-26 07:38:09.116826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.237 [2024-11-26 07:38:09.116859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.117067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.117101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.117288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.117302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.117541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.117575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.117702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.117736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.117994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.118036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.118287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.118323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.118630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.118663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 
00:28:41.238 [2024-11-26 07:38:09.118957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.118992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.119262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.119297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.119607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.119641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.119930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.119974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.120237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.120251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.120461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.120474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.120623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.120636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.120880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.120913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.121289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.121370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.121675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.121713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 
00:28:41.238 [2024-11-26 07:38:09.122006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.122041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.122308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.122352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.122522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.122539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.122726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.122760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.123003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.123038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.123238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.123270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.123576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.123609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.123823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.123856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.124074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.124110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 00:28:41.238 [2024-11-26 07:38:09.124328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.238 [2024-11-26 07:38:09.124344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.238 qpair failed and we were unable to recover it. 
00:28:41.238 [2024-11-26 07:38:09.124563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.238 [2024-11-26 07:38:09.124580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420
00:28:41.238 qpair failed and we were unable to recover it.
[... the three messages above repeat verbatim, differing only in their microsecond timestamps, for every reconnect attempt logged from 2024-11-26 07:38:09.124563 through 07:38:09.177545 (elapsed time 00:28:41.238 through 00:28:41.244); each attempt against tqpair=0x1f24ba0 at addr=10.0.0.2, port=4420 fails with connect() errno = 111 and the qpair cannot be recovered ...]
00:28:41.244 [2024-11-26 07:38:09.177736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.177752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.177904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.177921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.178198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.178233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.178432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.178465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.178758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.178774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.178939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.178963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.179191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.179224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.179409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.179443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.179628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.179660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.179915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.179960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 
00:28:41.244 [2024-11-26 07:38:09.180167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.180200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.180462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.180496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.180791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.180824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.180963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.180998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.181135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.181169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.181451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.181485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.181768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.181801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.182110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.182146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.182352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.182386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.182666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.182700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 
00:28:41.244 [2024-11-26 07:38:09.182902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.244 [2024-11-26 07:38:09.182936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.244 qpair failed and we were unable to recover it. 00:28:41.244 [2024-11-26 07:38:09.183167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.183201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.183534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.183569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.183793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.183826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.184079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.184115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.184437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.184476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.184729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.184764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.185055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.185091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.185363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.185398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.185683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.185717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 
00:28:41.245 [2024-11-26 07:38:09.186000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.186036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.186320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.186354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.186640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.186674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.186973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.187009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.187165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.187198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.187386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.187403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.187628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.187662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.187803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.187836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.188114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.188157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.188419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.188461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 
00:28:41.245 [2024-11-26 07:38:09.188747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.188782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.189058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.189100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.189319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.189337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.189578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.189595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.189862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.189879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.190030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.190048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.190149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.190167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.190327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.190344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.190523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.190540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.190707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.190725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 
00:28:41.245 [2024-11-26 07:38:09.190975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.191009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.191216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.191250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.191504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.191524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.191749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.191766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.192010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.192028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.192193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.192210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.192331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.192365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.192567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.192601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.245 [2024-11-26 07:38:09.192878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.245 [2024-11-26 07:38:09.192912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.245 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.193143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.193178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 
00:28:41.246 [2024-11-26 07:38:09.193394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.193427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.193726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.193744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.193892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.193909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.194153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.194171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.194334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.194351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.194507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.194544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.194756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.194789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.194997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.195032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.195165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.195182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.195344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.195361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 
00:28:41.246 [2024-11-26 07:38:09.195477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.195494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.195660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.195694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.195959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.195995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.196278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.196313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.196582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.196616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.196914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.196957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.197223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.197256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.197515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.197548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.197693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.197726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.198030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.198065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 
00:28:41.246 [2024-11-26 07:38:09.198241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.198259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.198426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.198443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.198666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.198699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.198933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.198979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.199284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.199327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.199560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.199577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.199783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.199799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.199899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.199915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.200119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.200154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.200355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.200388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 
00:28:41.246 [2024-11-26 07:38:09.200629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.200662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.200922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.200964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.201179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.201196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.201369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.201386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.201572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.201588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.201818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.201835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.202094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.246 [2024-11-26 07:38:09.202129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.246 qpair failed and we were unable to recover it. 00:28:41.246 [2024-11-26 07:38:09.202337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.202354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.202598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.202632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.202840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.202873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 
00:28:41.247 [2024-11-26 07:38:09.203131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.203166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.203451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.203468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.203680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.203697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.203921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.203966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.204263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.204297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.204601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.204634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.204896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.204929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.205234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.205269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.205538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.205572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.205796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.205830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 
00:28:41.247 [2024-11-26 07:38:09.206034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.206071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.206285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.206303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.206470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.206503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.206710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.206744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.206940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.206983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.207193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.207225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.207534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.207567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.207846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.207863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.208115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.208132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.208370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.208387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 
00:28:41.247 [2024-11-26 07:38:09.208488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.208512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.247 qpair failed and we were unable to recover it. 00:28:41.247 [2024-11-26 07:38:09.208749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.247 [2024-11-26 07:38:09.208766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.248 qpair failed and we were unable to recover it. 00:28:41.248 [2024-11-26 07:38:09.208957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-11-26 07:38:09.208991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.248 qpair failed and we were unable to recover it. 00:28:41.248 [2024-11-26 07:38:09.209274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-11-26 07:38:09.209317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.248 qpair failed and we were unable to recover it. 00:28:41.248 [2024-11-26 07:38:09.209486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-11-26 07:38:09.209503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.248 qpair failed and we were unable to recover it. 00:28:41.248 [2024-11-26 07:38:09.209727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-11-26 07:38:09.209761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.248 qpair failed and we were unable to recover it. 00:28:41.248 [2024-11-26 07:38:09.210039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-11-26 07:38:09.210075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.248 qpair failed and we were unable to recover it. 00:28:41.248 [2024-11-26 07:38:09.210364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-11-26 07:38:09.210398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.248 qpair failed and we were unable to recover it. 00:28:41.248 [2024-11-26 07:38:09.210620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-11-26 07:38:09.210654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.248 qpair failed and we were unable to recover it. 00:28:41.248 [2024-11-26 07:38:09.210968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.211004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 
00:28:41.249 [2024-11-26 07:38:09.211283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.211318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 00:28:41.249 [2024-11-26 07:38:09.211638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.211656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 00:28:41.249 [2024-11-26 07:38:09.211890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.211924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 00:28:41.249 [2024-11-26 07:38:09.212204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.212239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 00:28:41.249 [2024-11-26 07:38:09.212395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.212430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 00:28:41.249 [2024-11-26 07:38:09.212629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.212647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 00:28:41.249 [2024-11-26 07:38:09.212865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.212882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 00:28:41.249 [2024-11-26 07:38:09.213044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.249 [2024-11-26 07:38:09.213078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.249 qpair failed and we were unable to recover it. 00:28:41.249 [2024-11-26 07:38:09.213284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.250 [2024-11-26 07:38:09.213317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.250 qpair failed and we were unable to recover it. 00:28:41.250 [2024-11-26 07:38:09.213615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.250 [2024-11-26 07:38:09.213632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.250 qpair failed and we were unable to recover it. 
00:28:41.274 [2024-11-26 07:38:09.261574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.261592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.261688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.261705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.261859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.261876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.262104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.262122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.262290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.262323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.262521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.262555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.262829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.262862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.263128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.263171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.263380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.274 [2024-11-26 07:38:09.263413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.274 qpair failed and we were unable to recover it. 00:28:41.274 [2024-11-26 07:38:09.263556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.263574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 
00:28:41.275 [2024-11-26 07:38:09.263796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.263830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.275 [2024-11-26 07:38:09.264037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.264072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.275 [2024-11-26 07:38:09.264339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.264372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.275 [2024-11-26 07:38:09.264673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.264708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.275 [2024-11-26 07:38:09.264976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.265011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.275 [2024-11-26 07:38:09.265223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.265257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.275 [2024-11-26 07:38:09.265400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.265434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.275 [2024-11-26 07:38:09.265742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.265776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.275 [2024-11-26 07:38:09.266049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.275 [2024-11-26 07:38:09.266084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.275 qpair failed and we were unable to recover it. 00:28:41.276 [2024-11-26 07:38:09.266346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-11-26 07:38:09.266380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.276 qpair failed and we were unable to recover it. 
00:28:41.276 [2024-11-26 07:38:09.266668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-11-26 07:38:09.266702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.276 qpair failed and we were unable to recover it. 00:28:41.276 [2024-11-26 07:38:09.266999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-11-26 07:38:09.267017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.276 qpair failed and we were unable to recover it. 00:28:41.276 [2024-11-26 07:38:09.267261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-11-26 07:38:09.267279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.276 qpair failed and we were unable to recover it. 00:28:41.276 [2024-11-26 07:38:09.267469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-11-26 07:38:09.267486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.276 qpair failed and we were unable to recover it. 00:28:41.276 [2024-11-26 07:38:09.267755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-11-26 07:38:09.267790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.276 qpair failed and we were unable to recover it. 00:28:41.276 [2024-11-26 07:38:09.268049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-11-26 07:38:09.268085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.276 qpair failed and we were unable to recover it. 00:28:41.276 [2024-11-26 07:38:09.268291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.276 [2024-11-26 07:38:09.268324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.276 qpair failed and we were unable to recover it. 00:28:41.277 [2024-11-26 07:38:09.268531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.277 [2024-11-26 07:38:09.268548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.277 qpair failed and we were unable to recover it. 00:28:41.277 [2024-11-26 07:38:09.268714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.277 [2024-11-26 07:38:09.268748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.277 qpair failed and we were unable to recover it. 00:28:41.277 [2024-11-26 07:38:09.268886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.277 [2024-11-26 07:38:09.268920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.277 qpair failed and we were unable to recover it. 
00:28:41.277 [2024-11-26 07:38:09.269156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.277 [2024-11-26 07:38:09.269191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.277 qpair failed and we were unable to recover it. 00:28:41.277 [2024-11-26 07:38:09.269446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.277 [2024-11-26 07:38:09.269463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.277 qpair failed and we were unable to recover it. 00:28:41.277 [2024-11-26 07:38:09.269734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.277 [2024-11-26 07:38:09.269769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.277 qpair failed and we were unable to recover it. 00:28:41.277 [2024-11-26 07:38:09.269900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.277 [2024-11-26 07:38:09.269934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.277 qpair failed and we were unable to recover it. 00:28:41.277 [2024-11-26 07:38:09.270141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.278 [2024-11-26 07:38:09.270182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.278 qpair failed and we were unable to recover it. 00:28:41.278 [2024-11-26 07:38:09.270319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.278 [2024-11-26 07:38:09.270353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.278 qpair failed and we were unable to recover it. 00:28:41.278 [2024-11-26 07:38:09.270501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.278 [2024-11-26 07:38:09.270518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.278 qpair failed and we were unable to recover it. 00:28:41.278 [2024-11-26 07:38:09.270685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.278 [2024-11-26 07:38:09.270720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.278 qpair failed and we were unable to recover it. 00:28:41.278 [2024-11-26 07:38:09.270865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.278 [2024-11-26 07:38:09.270900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.278 qpair failed and we were unable to recover it. 00:28:41.278 [2024-11-26 07:38:09.271194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.278 [2024-11-26 07:38:09.271230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.278 qpair failed and we were unable to recover it. 
00:28:41.278 [2024-11-26 07:38:09.271416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.278 [2024-11-26 07:38:09.271451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.279 [2024-11-26 07:38:09.271757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.279 [2024-11-26 07:38:09.271792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.279 [2024-11-26 07:38:09.271984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.279 [2024-11-26 07:38:09.272021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.279 [2024-11-26 07:38:09.272331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.279 [2024-11-26 07:38:09.272365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.279 [2024-11-26 07:38:09.272536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.279 [2024-11-26 07:38:09.272553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.279 [2024-11-26 07:38:09.272735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.279 [2024-11-26 07:38:09.272752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.279 [2024-11-26 07:38:09.272922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.279 [2024-11-26 07:38:09.272939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.279 [2024-11-26 07:38:09.273180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.279 [2024-11-26 07:38:09.273215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.279 [2024-11-26 07:38:09.273435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.279 [2024-11-26 07:38:09.273468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.279 qpair failed and we were unable to recover it. 00:28:41.280 [2024-11-26 07:38:09.273609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-11-26 07:38:09.273643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.280 qpair failed and we were unable to recover it. 
00:28:41.280 [2024-11-26 07:38:09.273779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-11-26 07:38:09.273797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.280 qpair failed and we were unable to recover it. 00:28:41.280 [2024-11-26 07:38:09.274004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-11-26 07:38:09.274040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.280 qpair failed and we were unable to recover it. 00:28:41.280 [2024-11-26 07:38:09.274302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-11-26 07:38:09.274336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.280 qpair failed and we were unable to recover it. 00:28:41.280 [2024-11-26 07:38:09.274594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-11-26 07:38:09.274627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.280 qpair failed and we were unable to recover it. 00:28:41.280 [2024-11-26 07:38:09.274815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-11-26 07:38:09.274833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.280 qpair failed and we were unable to recover it. 00:28:41.280 [2024-11-26 07:38:09.274986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.280 [2024-11-26 07:38:09.275003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.280 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.275150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.275166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.275331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.275348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.275470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.275487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.275669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.275686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 
00:28:41.281 [2024-11-26 07:38:09.275883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.275899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.275994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.276016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.276130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.276147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.276238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.276254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.276418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.276435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.281 [2024-11-26 07:38:09.276590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.281 [2024-11-26 07:38:09.276624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.281 qpair failed and we were unable to recover it. 00:28:41.282 [2024-11-26 07:38:09.276818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.282 [2024-11-26 07:38:09.276851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.282 qpair failed and we were unable to recover it. 00:28:41.282 [2024-11-26 07:38:09.277106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.282 [2024-11-26 07:38:09.277142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.282 qpair failed and we were unable to recover it. 00:28:41.282 [2024-11-26 07:38:09.277267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.282 [2024-11-26 07:38:09.277301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.282 qpair failed and we were unable to recover it. 00:28:41.282 [2024-11-26 07:38:09.277504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.282 [2024-11-26 07:38:09.277538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.282 qpair failed and we were unable to recover it. 
00:28:41.282-00:28:41.283 [2024-11-26 07:38:09.277726 - 07:38:09.278679] ... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence continues for tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 ...
00:28:41.282 [2024-11-26 07:38:09.278926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.282 [2024-11-26 07:38:09.278983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420
00:28:41.282 qpair failed and we were unable to recover it.
00:28:41.282 [2024-11-26 07:38:09.279240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.282 [2024-11-26 07:38:09.279281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.282 qpair failed and we were unable to recover it.
00:28:41.283-00:28:41.585 [2024-11-26 07:38:09.279493 - 07:38:09.298100] ... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously for tqpair=0x7f76c4000b90 ...
00:28:41.585 [2024-11-26 07:38:09.298249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.298261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.298356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.298369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.298452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.298465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.298602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.298614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.298696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.298709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.298801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.298813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.298890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.298903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.299079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.299093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.299159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.299172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.299358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.299371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 
00:28:41.585 [2024-11-26 07:38:09.299455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.299468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.299618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.299631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.299787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.299820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.299961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.300009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.300155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.300189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.300445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.300527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.300782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.300861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.301032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.301072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.301327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.301362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.301499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.301532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 
00:28:41.585 [2024-11-26 07:38:09.301827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.301861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.302057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.302075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.585 [2024-11-26 07:38:09.302184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.585 [2024-11-26 07:38:09.302218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.585 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.302491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.302525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.302672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.302705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.302891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.302907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.302993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.303039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.303296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.303331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.303456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.303501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.303681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.303699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 
00:28:41.586 [2024-11-26 07:38:09.303877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.303909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.304141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.304176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.304297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.304330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.304458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.304491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.304674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.304691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.304800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.304834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.305024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.305059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.305272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.305306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.305462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.305480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.305583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.305599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 
00:28:41.586 [2024-11-26 07:38:09.305875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.305892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.306002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.306020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.306198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.306232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.306373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.306408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.306630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.306664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.306780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.306797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.306910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.306926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.307077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.307094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.307271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.307304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.307531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.307565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 
00:28:41.586 [2024-11-26 07:38:09.307678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.307710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.307838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.307854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.308075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.308110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.308370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.308403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.308590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.308624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.308831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.308871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.309112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.309161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.309241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.309254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.309390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.309403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 00:28:41.586 [2024-11-26 07:38:09.309541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.309554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.586 qpair failed and we were unable to recover it. 
00:28:41.586 [2024-11-26 07:38:09.309635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.586 [2024-11-26 07:38:09.309648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.309741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.309775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.309979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.310017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.310132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.310166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.310330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.310376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.310530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.310564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.310755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.310772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.310931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.310952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.311100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.311116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.311306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.311324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 
00:28:41.587 [2024-11-26 07:38:09.311413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.311429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.311591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.311609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.311716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.311749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.312009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.312044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.312190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.312223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.312477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.312511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.312691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.312725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.312858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.312892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.313022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.313057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.313337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.313370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 
00:28:41.587 [2024-11-26 07:38:09.313580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.313614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.313753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.313787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.313847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f32af0 (9): Bad file descriptor 00:28:41.587 [2024-11-26 07:38:09.314159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.314198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.314465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.314500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.314704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.314737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.314999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.315017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.315111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.315129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.315290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.315306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.315493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.315510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 
00:28:41.587 [2024-11-26 07:38:09.315743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.315760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.315885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.315918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.316121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.316156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.316281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.316314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.316507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.316541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.316739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.316772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.316912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.316928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.317027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.317045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.587 qpair failed and we were unable to recover it. 00:28:41.587 [2024-11-26 07:38:09.317144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.587 [2024-11-26 07:38:09.317162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.317255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.317272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 
00:28:41.588 [2024-11-26 07:38:09.317343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.317360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.317523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.317538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.317620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.317637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.317716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.317733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.317883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.317917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.318137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.318173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.318305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.318340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.318527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.318559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.318747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.318782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.318929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.318981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 
00:28:41.588 [2024-11-26 07:38:09.319164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.319197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.319388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.319422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.319601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.319614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.319705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.319717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.319790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.319803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.319893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.319906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.320076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.320089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.320251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.320265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.320549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.320582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.320766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.320799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 
00:28:41.588 [2024-11-26 07:38:09.320914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.320926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.321075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.321088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.321238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.321271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.321412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.321447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.321627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.321661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.321787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.321820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.322072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.322108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.322239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.322272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.322395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.322428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.322622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.322657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 
00:28:41.588 [2024-11-26 07:38:09.322912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.322945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.323107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.323141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.323341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.323375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.323639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.323673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.323868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.323902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.324111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.588 [2024-11-26 07:38:09.324144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.588 qpair failed and we were unable to recover it. 00:28:41.588 [2024-11-26 07:38:09.324447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.324523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.324690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.324710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.324881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.324915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.325121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.325155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 
00:28:41.589 [2024-11-26 07:38:09.325338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.325370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.325647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.325680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.325802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.325835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.326103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.326137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.326335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.326370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.326486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.326503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.326647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.326663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.326888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.326904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.327073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.327092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.327175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.327192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 
00:28:41.589 [2024-11-26 07:38:09.327364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.327381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.327617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.327634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.327882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.327916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.328071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.328104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.328298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.328331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.328470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.328502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.328694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.328711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.328814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.328848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.329050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.329085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.329220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.329254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 
00:28:41.589 [2024-11-26 07:38:09.329503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.329537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.329718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.329752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.330041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.330076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.330306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.330345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.330542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.330575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.330795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.330827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.330973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.331008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.331262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.331296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.589 [2024-11-26 07:38:09.331478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.589 [2024-11-26 07:38:09.331511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.589 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.331778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.331811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 
00:28:41.590 [2024-11-26 07:38:09.331973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.332007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.332218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.332252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.332442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.332474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.332670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.332703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.332943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.332960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.333055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.333068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.333209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.333259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.333395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.333429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.333609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.333643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.333917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.333958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 
00:28:41.590 [2024-11-26 07:38:09.334089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.334122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.334245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.334278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.334469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.334502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.334685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.334698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.334844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.334877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.335156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.335190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.335318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.335351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.335487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.335519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.335773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.335807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.336065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.336099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 
00:28:41.590 [2024-11-26 07:38:09.336290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.336323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.336569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.336602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.336800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.336812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.336999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.337033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.337236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.337269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.337485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.337522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.337664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.337676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.337828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.337862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.338068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.338103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.338241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.338274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 
00:28:41.590 [2024-11-26 07:38:09.338466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.338500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.338759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.338771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.338872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.338912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.339183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.339218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.339362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.339395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.339610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.590 [2024-11-26 07:38:09.339643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.590 qpair failed and we were unable to recover it. 00:28:41.590 [2024-11-26 07:38:09.339774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.339807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.339946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.339999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.340219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.340252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.340460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.340492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 
00:28:41.591 [2024-11-26 07:38:09.340679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.340718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.340936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.340954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.341098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.341110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.341278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.341311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.341515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.341548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.341677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.341710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.341835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.341880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.341957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.341970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.342102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.342114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.342313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.342325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 
00:28:41.591 [2024-11-26 07:38:09.342524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.342536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.342684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.342696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.342786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.342798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.342956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.342991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.343189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.343223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.343344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.343377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.343515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.343555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.343714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.343727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.343803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.343815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.343893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.343906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 
00:28:41.591 [2024-11-26 07:38:09.344127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.344141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.344294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.344307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.344399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.344432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.344552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.344586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.344769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.344801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.344929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.344973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.345175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.345208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.345332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.345366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.345618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.345650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.345845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.345875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 
00:28:41.591 [2024-11-26 07:38:09.346039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.346051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.346160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.346193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.346337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.346368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.346560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.346600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.591 qpair failed and we were unable to recover it. 00:28:41.591 [2024-11-26 07:38:09.346780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.591 [2024-11-26 07:38:09.346812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.346925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.346968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.347176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.347209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.347393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.347427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.347621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.347653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.347899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.347933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 
00:28:41.592 [2024-11-26 07:38:09.348138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.348171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.348366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.348399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.348529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.348541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.348709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.348720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.348819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.348851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.349044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.349079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.349273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.349305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.349519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.349553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.349733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.349745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.349827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.349838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 
00:28:41.592 [2024-11-26 07:38:09.349942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.349983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.350183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.350220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.350404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.350445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.350642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.350655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.350790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.350801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.350968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.351003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.351252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.351284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.351405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.351437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.351613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.351625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.351751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.351763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 
00:28:41.592 [2024-11-26 07:38:09.352037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.352073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.352197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.352229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.352440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.352472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.352608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.352620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.352818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.352830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.352980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.352993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.353065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.353076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.353268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.353301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.353424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.353458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.353576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.353608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 
00:28:41.592 [2024-11-26 07:38:09.353700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.353711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.353782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.592 [2024-11-26 07:38:09.353794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.592 qpair failed and we were unable to recover it. 00:28:41.592 [2024-11-26 07:38:09.353876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.353887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.353956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.353970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.354049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.354061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.354217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.354249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.354426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.354458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.354582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.354615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.354739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.354780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.354859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.354871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 
00:28:41.593 [2024-11-26 07:38:09.354945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.354961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.355203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.355215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.355374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.355386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.355562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.355574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.355639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.355651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.355732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.355745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.355818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.355830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.355974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.355986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.356141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.356174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.356294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.356326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 
00:28:41.593 [2024-11-26 07:38:09.356514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.356547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.356716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.356727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.356944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.357005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.357119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.357152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.357333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.357367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.357562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.357594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.357704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.357737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.358020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.358054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.358172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.358205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.358451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.358483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 
00:28:41.593 [2024-11-26 07:38:09.358615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.358648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.358836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.358870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.359068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.359103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.359285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.359317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.359546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.359578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.359699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.359731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.593 [2024-11-26 07:38:09.359855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.593 [2024-11-26 07:38:09.359867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.593 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.359964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.359976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.360041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.360053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.360134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.360146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 
00:28:41.594 [2024-11-26 07:38:09.360303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.360314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.360401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.360413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.360548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.360560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.360674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.360714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.360907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.360938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.361082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.361115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.361358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.361390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.361657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.361701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.361832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.361844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.361933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.361945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 
00:28:41.594 [2024-11-26 07:38:09.362011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.362022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.362177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.362189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.362331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.362342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.362541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.362552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.362620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.362632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.362711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.362738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.362861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.362894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.363021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.363055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.363251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.363284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.363405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.363436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 
00:28:41.594 [2024-11-26 07:38:09.363621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.363655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.363901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.363913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.364011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.364040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.364245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.364278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.364409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.364441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.364633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.364666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.364879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.364911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.365101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.365136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.365316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.365348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.365472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.365505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 
00:28:41.594 [2024-11-26 07:38:09.365646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.365680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.365854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.365887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.366006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.366018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.594 [2024-11-26 07:38:09.366154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.594 [2024-11-26 07:38:09.366166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.594 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.366372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.366384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.366465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.366477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.366545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.366556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.366627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.366660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.366774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.366808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.366987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.367020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 
00:28:41.595 [2024-11-26 07:38:09.367196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.367230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.367366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.367398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.367584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.367617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.367801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.367840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.368035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.368069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.368247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.368281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.368475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.368507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.368621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.368655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.368853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.368886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.369146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.369180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 
00:28:41.595 [2024-11-26 07:38:09.369358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.369391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.369578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.369611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.369804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.369836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.370006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.370019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.370161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.370173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.370393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.370438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.370634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.370666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.370795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.370828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.370929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.370940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.371022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.371034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 
00:28:41.595 [2024-11-26 07:38:09.371178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.371190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.371266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.371278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.371371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.371384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.371498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.371515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.371665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.371682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.371828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.371845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.371944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.371967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.372083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.372101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.372313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.372333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.372461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.372475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 
00:28:41.595 [2024-11-26 07:38:09.372635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.372670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.595 [2024-11-26 07:38:09.372803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.595 [2024-11-26 07:38:09.372836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.595 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.372990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.373028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.373214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.373248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.373441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.373474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.373677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.373710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.373817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.373829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.373976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.373988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.374075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.374086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.374228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.374239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 
00:28:41.596 [2024-11-26 07:38:09.374306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.374318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.374378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.374390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.374524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.374536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.374623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.374637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.374769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.374781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.374901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.374933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.375226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.375260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.375453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.375488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.375602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.375630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.375769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.375780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 
00:28:41.596 [2024-11-26 07:38:09.375863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.375874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.376023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.376035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.376264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.376296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.376407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.376440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.376636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.376676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.376870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.376882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.376972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.377008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.377145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.377179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.377366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.377400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.377644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.377678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 
00:28:41.596 [2024-11-26 07:38:09.377765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.377777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.377842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.377854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.377994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.378006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.378083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.378095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.378163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.378174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.378235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.378246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.378331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.378342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.378466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.378477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.378613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.378624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 00:28:41.596 [2024-11-26 07:38:09.378695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.596 [2024-11-26 07:38:09.378706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.596 qpair failed and we were unable to recover it. 
00:28:41.597 [2024-11-26 07:38:09.378802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.378835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.378973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.379007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.379193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.379227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.379441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.379475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.379596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.379628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.379748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.379760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.379890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.379902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.380037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.380050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.380126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.380138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.380204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.380216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 
00:28:41.597 [2024-11-26 07:38:09.380315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.380347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.380526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.380558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.380747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.380781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.381025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.381066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.381324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.381357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.381470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.381519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.381706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.381740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.381987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.382023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.382141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.382153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.382364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.382396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 
00:28:41.597 [2024-11-26 07:38:09.382514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.382546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.382747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.382779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.382961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.382974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.383116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.383148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.383341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.383373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.383489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.383521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.383672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.383684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.383821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.597 [2024-11-26 07:38:09.383833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.597 qpair failed and we were unable to recover it. 00:28:41.597 [2024-11-26 07:38:09.384057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.384069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.384302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.384334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 
00:28:41.598 [2024-11-26 07:38:09.384456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.384488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.384674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.384706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.384863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.384875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.385112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.385124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.385256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.385269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.385339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.385350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.385477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.385489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.385579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.385591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.385732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.385744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.385885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.385918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 
00:28:41.598 [2024-11-26 07:38:09.386233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.386267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.386457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.386491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.386678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.386712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.386842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.386874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.387007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.387042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.387163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.387196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.387406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.387439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.387617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.387650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.387828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.387861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.388037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.388071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 
00:28:41.598 [2024-11-26 07:38:09.388181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.388214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.388399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.388432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.388628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.388661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.388768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.388782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.388916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.388928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.389027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.389040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.389119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.389131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.389361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.389395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.389660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.389691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.598 [2024-11-26 07:38:09.389822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.389834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 
00:28:41.598 [2024-11-26 07:38:09.390049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.598 [2024-11-26 07:38:09.390062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.598 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.390150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.390161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.390299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.390310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.390504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.390536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.390712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.390745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.391012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.391047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.391234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.391268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.391475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.391508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.391685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.391718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.391882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.391895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 
00:28:41.599 [2024-11-26 07:38:09.392045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.392057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.392136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.392148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.392252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.392285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.392488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.392519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.392719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.392752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.392920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.392931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.393021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.393033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.393111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.393123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.393250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.393261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.393456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.393468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 
00:28:41.599 [2024-11-26 07:38:09.393626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.393638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.393774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.393785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.393870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.393882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.393968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.393981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.394082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.394115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.394359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.394393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.394574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.394605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.394779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.394812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.395059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.395072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.395242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.395275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 
00:28:41.599 [2024-11-26 07:38:09.395404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.395436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.599 [2024-11-26 07:38:09.395625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.599 [2024-11-26 07:38:09.395657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.599 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.395767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.395792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.395877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.395891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.396122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.396135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.396212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.396224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.396356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.396388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.396503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.396535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.396718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.396751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.396861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.396872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 
00:28:41.600 [2024-11-26 07:38:09.396967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.396979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.397147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.397181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.397355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.397387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.397570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.397603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.397778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.397789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.398045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.398080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.398214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.398246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.398426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.398459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.398645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.398677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.398862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.398894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 
00:28:41.600 [2024-11-26 07:38:09.399179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.399212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.399397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.399430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.399556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.399588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.399696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.399728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.399974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.400007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.400138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.400169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.400347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.400379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.400576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.400608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.400877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.400909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.401065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.401099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 
00:28:41.600 [2024-11-26 07:38:09.401300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.401333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.401526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.401559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.401671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.401704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.401923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.401964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.402098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.402131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.402324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.402356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.402603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.600 [2024-11-26 07:38:09.402636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.600 qpair failed and we were unable to recover it. 00:28:41.600 [2024-11-26 07:38:09.402823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.402855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.402986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.402998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.403126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.403137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 
00:28:41.601 [2024-11-26 07:38:09.403213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.403225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.403307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.403319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.403532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.403565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.403761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.403800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.404045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.404079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.404219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.404251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.404367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.404400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.404590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.404622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.404744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.404777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.404912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.404924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 
00:28:41.601 [2024-11-26 07:38:09.405015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.405027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.405107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.405118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.405201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.405212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.405356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.405367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.405582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.405615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.405795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.405827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.405960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.405994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.406258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.406286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.406465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.406497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.406673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.406705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 
00:28:41.601 [2024-11-26 07:38:09.406972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.407006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.407195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.407227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.407352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.407385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.407490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.407523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.407700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.407733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.407974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.408007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.408276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.408309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.408486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.408518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.408706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.408739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.408946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.409009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 
00:28:41.601 [2024-11-26 07:38:09.409202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.601 [2024-11-26 07:38:09.409214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.601 qpair failed and we were unable to recover it. 00:28:41.601 [2024-11-26 07:38:09.409357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.409369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.409504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.409515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.409676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.409709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.409894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.409926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.410116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.410148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.410268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.410301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.410475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.410508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.410749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.410781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.410914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.410954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 
00:28:41.602 [2024-11-26 07:38:09.411222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.411255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.411358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.411390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.411511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.411543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.411730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.411768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.411880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.411891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.412036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.412048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.412178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.412189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.412356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.412389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.412568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.412599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.412777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.412810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 
00:28:41.602 [2024-11-26 07:38:09.412982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.412994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.413061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.413072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.413212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.413223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.413296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.413307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.413394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.413406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.413555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.413566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.413700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.413711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.413919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.413961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.414076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.414108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.414376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.414410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 
00:28:41.602 [2024-11-26 07:38:09.414665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.414697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.414838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.414871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.415053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.415065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.602 qpair failed and we were unable to recover it. 00:28:41.602 [2024-11-26 07:38:09.415190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.602 [2024-11-26 07:38:09.415202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.415350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.415361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.415509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.415541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.415717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.415750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.415868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.415900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.416020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.416054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.416213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.416225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 
00:28:41.603 [2024-11-26 07:38:09.416292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.416304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.416365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.416376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.416472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.416484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.416682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.416693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.416827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.416866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.416980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.417014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.417126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.417158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.417350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.417382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.417570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.417602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.417787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.417819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 
00:28:41.603 [2024-11-26 07:38:09.418014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.418048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.418167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.418199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.418336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.418368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.418559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.418597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.418780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.418812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.418919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.418958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.419077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.419110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.419294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.419326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.419515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.419547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.419681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.419713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 
00:28:41.603 [2024-11-26 07:38:09.419906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.419939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.420117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.420129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.420216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.420228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.420420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.420452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.603 qpair failed and we were unable to recover it. 00:28:41.603 [2024-11-26 07:38:09.420586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.603 [2024-11-26 07:38:09.420618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.420814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.420846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.421021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.421053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.421252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.421285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.421412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.421445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.421687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.421718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 
00:28:41.604 [2024-11-26 07:38:09.421856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.421868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.422013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.422036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.422172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.422184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.422269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.422281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.422341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.422353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.422441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.422452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.422583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.422595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.422743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.422775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.422989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.423022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.423130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.423161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 
00:28:41.604 [2024-11-26 07:38:09.423400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.423472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.423689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.423726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.423988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.424024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.424268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.424302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.424427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.424460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.424598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.424630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.424921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.424964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.425227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.425260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.425506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.425538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.425783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.425816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 
00:28:41.604 [2024-11-26 07:38:09.425976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.425993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.426145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.426160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.604 qpair failed and we were unable to recover it. 00:28:41.604 [2024-11-26 07:38:09.426326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.604 [2024-11-26 07:38:09.426342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.426446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.426462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.426536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.426552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.426743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.426776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.426892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.426925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.427062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.427094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.427272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.427305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.427495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.427528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 
00:28:41.605 [2024-11-26 07:38:09.427649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.427681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.427897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.427930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.428064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.428097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.428373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.428407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.428601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.428634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.428824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.428857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.429091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.429108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.429259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.429298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.429480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.429514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.429754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.429786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 
00:28:41.605 [2024-11-26 07:38:09.429922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.429938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.430027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.430043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.430137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.430153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.430303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.430319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.430475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.430507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.430751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.430783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.430924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.430965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.431101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.431118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.431198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.431213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.431317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.431349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 
00:28:41.605 [2024-11-26 07:38:09.431461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.431493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.431682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.431715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.431910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.431944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.432156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.432189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.432380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.432412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.432604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.432637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.432748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.432780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.432904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.432936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.433144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.433160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.605 [2024-11-26 07:38:09.433308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.433323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 
00:28:41.605 [2024-11-26 07:38:09.433479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.605 [2024-11-26 07:38:09.433495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.605 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.433569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.433582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.433734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.433745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.433889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.433901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.434123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.434163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.434272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.434304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.434434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.434467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.434731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.434762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.434885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.434918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.435049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.435061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 
00:28:41.606 [2024-11-26 07:38:09.435204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.435215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.435368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.435400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.435578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.435610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.435737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.435768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.435942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.435984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.436102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.436135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.436324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.436355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.436614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.436646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.436846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.436879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.437065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.437077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 
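For contrast, the refusal disappears as soon as something accepts on 10.0.0.2:4420; in this test that would be the SPDK NVMe-oF TCP target once its listener is configured (typically via the nvmf_subsystem_add_listener RPC). The sketch below is only a stand-in listener, not the target itself, but it is enough to turn the errno = 111 seen above into a completed TCP handshake.

/* Companion sketch (again not SPDK code): the simplest possible TCP
 * listener on port 4420.  While a process like this (or a properly
 * configured NVMe/TCP target) is accepting on the target address, the
 * initiator's connect() succeeds instead of returning ECONNREFUSED. */
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);    /* listen on all local addresses */
    addr.sin_port = htons(4420);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) { perror("bind"); return 1; }
    if (listen(fd, 16) != 0) { perror("listen"); return 1; }

    /* Accept one connection and exit; enough to demonstrate the handshake. */
    int client = accept(fd, NULL, NULL);
    if (client >= 0) {
        printf("accepted a connection on port 4420\n");
        close(client);
    }
    close(fd);
    return 0;
}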
00:28:41.606 [2024-11-26 07:38:09.437305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.437339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.437526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.437558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.437695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.437728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.437920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.437961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.438145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.438177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.438302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.438334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.438518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.438551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.438664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.438697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.438913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.438944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.439129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.439141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 
00:28:41.606 [2024-11-26 07:38:09.439214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.439226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.439450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.439461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.439556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.439589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.439703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.439735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.439960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.439994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.440182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.440193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.440338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.440370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.606 [2024-11-26 07:38:09.440547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.606 [2024-11-26 07:38:09.440579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.606 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.440774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.440807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.440993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.441005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 
00:28:41.607 [2024-11-26 07:38:09.441173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.441206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.441471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.441502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.441636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.441669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.441828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.441839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.442001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.442041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.442184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.442217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.442427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.442461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.442648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.442679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.442856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.442867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.443007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.443040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 
00:28:41.607 [2024-11-26 07:38:09.443254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.443286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.443529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.443561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.443668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.443701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.443909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.443940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.444139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.444172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.444293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.444305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.444449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.444482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.444676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.444709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.444916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.444963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.445071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.445083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 
00:28:41.607 [2024-11-26 07:38:09.445168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.445179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.445328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.445360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.445499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.445532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.445737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.445769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.445966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.445978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.446055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.446066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.446135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.446147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.446215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.446227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.446288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.446300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.446381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.446393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 
00:28:41.607 [2024-11-26 07:38:09.446524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.446536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.446638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.446650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.446814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.446848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.446973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.447007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.447194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.447226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.607 qpair failed and we were unable to recover it. 00:28:41.607 [2024-11-26 07:38:09.447339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.607 [2024-11-26 07:38:09.447371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.447637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.447670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.447863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.447895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.448079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.448112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.448232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.448245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 
00:28:41.608 [2024-11-26 07:38:09.448380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.448391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.448537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.448549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.448637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.448648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.448892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.448924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.449107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.449146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.449345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.449357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.449505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.449537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.449713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.449746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.449922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.449964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.450141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.450153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 
00:28:41.608 [2024-11-26 07:38:09.450351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.450384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.450501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.450533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.450728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.450761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.450942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.450958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.451099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.451131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.451265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.451298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.451476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.451508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.451775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.451808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.452001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.452035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.452228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.452260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 
00:28:41.608 [2024-11-26 07:38:09.452397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.452429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.452541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.452573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.452790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.452823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.452945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.452999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.453253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.453285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.453501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.453534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.453784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.453816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.454056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.454111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.608 qpair failed and we were unable to recover it. 00:28:41.608 [2024-11-26 07:38:09.454319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.608 [2024-11-26 07:38:09.454331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.454534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.454566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 
00:28:41.609 [2024-11-26 07:38:09.454756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.454788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.454968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.455046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.455156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.455193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.455374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.455391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.455608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.455642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.455909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.455943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.456105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.456142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.456239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.456255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.456344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.456360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.456611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.456647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 
00:28:41.609 [2024-11-26 07:38:09.456893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.456925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.457138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.457171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.457355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.457388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.457589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.457604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.457701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.457747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.457968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.458006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.458202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.458234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.458477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.458493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.458700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.458716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.458880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.458912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 
00:28:41.609 [2024-11-26 07:38:09.459038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.459071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.459248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.459280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.459486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.459518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.459762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.459794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.460036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.460068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.460284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.460316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.460488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.460519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.460708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.460741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.460938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.460955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.461073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.461106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 
00:28:41.609 [2024-11-26 07:38:09.461304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.461336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.461473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.461506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.461772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.461803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.462043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.462077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.462261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.462291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.462469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.462502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.609 [2024-11-26 07:38:09.462767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.609 [2024-11-26 07:38:09.462798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.609 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.462977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.462990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.463135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.463166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.463311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.463343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 
00:28:41.610 [2024-11-26 07:38:09.463534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.463565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.463686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.463718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.463916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.463957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.464095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.464127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.464301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.464333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.464595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.464627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.464872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.464905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.465046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.465080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.465263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.465295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.465400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.465433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 
00:28:41.610 [2024-11-26 07:38:09.465554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.465586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.465835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.465867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.466142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.466176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.466422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.466455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.466710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.466743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.466901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.466913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.467142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.467176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.467349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.467381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.467556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.467589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.467716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.467749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 
00:28:41.610 [2024-11-26 07:38:09.467942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.467981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.468226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.468258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.468436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.468468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.468653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.468684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.468879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.468912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.469054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.469087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.469326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.469358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.469491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.469523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.469768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.469806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.469903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.469914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 
00:28:41.610 [2024-11-26 07:38:09.470135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.470169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.470346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.470379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.470508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.470540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.470731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.470763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.610 qpair failed and we were unable to recover it. 00:28:41.610 [2024-11-26 07:38:09.470893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.610 [2024-11-26 07:38:09.470927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.471177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.471209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.471371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.471383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.471525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.471536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.471747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.471758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.471894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.471906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 
00:28:41.611 [2024-11-26 07:38:09.472053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.472087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.472199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.472231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.472360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.472393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.472526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.472558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.472803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.472836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.473032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.473066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.473327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.473339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.473500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.473533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.473645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.473676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.473868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.473901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 
00:28:41.611 [2024-11-26 07:38:09.474175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.474208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.474375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.474386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.474557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.474568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.474699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.474731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.474844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.474877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.475103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.475144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.475357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.475368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.475559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.475571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.475715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.475727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.475877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.475909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 
00:28:41.611 [2024-11-26 07:38:09.476054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.476089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.476292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.476324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.476608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.476641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.476887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.476920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.477109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.477121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.477413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.477445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.477716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.477748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.477921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.477965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.478215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.478254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.478511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.478523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 
00:28:41.611 [2024-11-26 07:38:09.478611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.478622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.478762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.611 [2024-11-26 07:38:09.478774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.611 qpair failed and we were unable to recover it. 00:28:41.611 [2024-11-26 07:38:09.478916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.478928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.479020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.479032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.479108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.479120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.479311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.479336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.479514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.479546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.479726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.479758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.479928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.479939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.480034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.480046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 
00:28:41.612 [2024-11-26 07:38:09.480144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.480155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.480224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.480236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.480468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.480500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.480624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.480656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.480929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.480995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.481140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.481152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.481228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.481240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.481445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.481478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.481608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.481640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.481917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.481961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 
00:28:41.612 [2024-11-26 07:38:09.482084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.482117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.482388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.482420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.482542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.482575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.482719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.482752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.482994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.483028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.483149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.483182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.483424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.483436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.483655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.483667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.483742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.483753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.483901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.483912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 
00:28:41.612 [2024-11-26 07:38:09.484008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.484020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.484167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.484179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.484247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.484258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.484323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.484334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.484495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.484527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.484673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.484705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.484817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.484848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.485023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.485036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.485167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.485180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 00:28:41.612 [2024-11-26 07:38:09.485330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.612 [2024-11-26 07:38:09.485341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.612 qpair failed and we were unable to recover it. 
00:28:41.612 [2024-11-26 07:38:09.485430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.485474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.485739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.485772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.485900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.485933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.486122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.486134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.486284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.486316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.486577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.486610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.486803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.486835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.487101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.487136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.487264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.487275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 00:28:41.613 [2024-11-26 07:38:09.487505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.613 [2024-11-26 07:38:09.487516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.613 qpair failed and we were unable to recover it. 
00:28:41.618 [2024-11-26 07:38:09.527431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.527443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.618 [2024-11-26 07:38:09.527577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.527588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.618 [2024-11-26 07:38:09.527719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.527730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.618 [2024-11-26 07:38:09.527823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.527835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.618 [2024-11-26 07:38:09.528031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.528066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.618 [2024-11-26 07:38:09.528249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.528282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.618 [2024-11-26 07:38:09.528464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.528476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.618 [2024-11-26 07:38:09.528565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.528577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.618 [2024-11-26 07:38:09.528683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.618 [2024-11-26 07:38:09.528716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.618 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.529043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.529077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 
00:28:41.619 [2024-11-26 07:38:09.529270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.529302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.529509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.529542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.529734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.529765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.530010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.530043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.530243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.530275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.530473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.530506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.530614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.530647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.530912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.530944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.531077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.531111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.531224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.531236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 
00:28:41.619 [2024-11-26 07:38:09.531378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.531418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.531591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.531623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.531751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.531784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.531899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.531958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.532088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.532120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.532301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.532334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.532522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.532554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.532815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.532846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.532977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.533017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.533262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.533294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 
00:28:41.619 [2024-11-26 07:38:09.533504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.533515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.533607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.533638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.533827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.533859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.534046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.534080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.534222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.534254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.534448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.534481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.534614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.534646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.534896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.534928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.535183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.535195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.535270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.535282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 
00:28:41.619 [2024-11-26 07:38:09.535357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.535368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.535505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.535517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.535647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.535658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.535808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.535841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.536028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.536062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.536240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.536273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.619 qpair failed and we were unable to recover it. 00:28:41.619 [2024-11-26 07:38:09.536448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.619 [2024-11-26 07:38:09.536460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.536523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.536535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.536687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.536698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.536779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.536790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 
00:28:41.620 [2024-11-26 07:38:09.536874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.536886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.536961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.536973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.537110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.537142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.537415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.537447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.537557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.537590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.537810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.537843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.538098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.538110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.538242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.538253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.538408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.538440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.538684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.538717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 
00:28:41.620 [2024-11-26 07:38:09.538992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.539039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.539211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.539223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.539376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.539408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.539532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.539564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.539745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.539777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.539965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.539999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.540253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.540265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.540521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.540552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.540747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.540785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.540967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.541001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 
00:28:41.620 [2024-11-26 07:38:09.541196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.541229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.541415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.541427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.541524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.541535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.541621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.541653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.541917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.541970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.542213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.542245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.542436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.542468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.542590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.542624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.542827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.542859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.543045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.543057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 
00:28:41.620 [2024-11-26 07:38:09.543122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.543133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.543216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.543227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.543380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.543412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.543550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.620 [2024-11-26 07:38:09.543582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.620 qpair failed and we were unable to recover it. 00:28:41.620 [2024-11-26 07:38:09.543792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.543825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.544016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.544049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.544311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.544344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.544432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.544444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.544656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.544689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.544824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.544856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 
00:28:41.621 [2024-11-26 07:38:09.545043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.545077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.545252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.545284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.545418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.545440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.545527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.545539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.545680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.545692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.545874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.545961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.546167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.546205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.546397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.546431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.546563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.546595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.546728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.546761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 
00:28:41.621 [2024-11-26 07:38:09.546974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.547008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.547197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.547229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.547467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.547500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.547680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.547712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.547965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.548000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.548102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.548118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.548202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.548217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.548368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.548401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.548512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.548554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.548798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.548831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 
00:28:41.621 [2024-11-26 07:38:09.548962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.548995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.549135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.549167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.549293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.549325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.549519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.549535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.549747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.549779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.550038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.550072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.550336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.550369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.550609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.550641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.550832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.550866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.551053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.551070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 
00:28:41.621 [2024-11-26 07:38:09.551279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.551311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.551592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.621 [2024-11-26 07:38:09.551625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.621 qpair failed and we were unable to recover it. 00:28:41.621 [2024-11-26 07:38:09.551898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.551931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.552133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.552165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.552351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.552384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.552502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.552517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.552749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.552780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.552961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.552994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.553236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.553269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.553463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.553496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 
00:28:41.622 [2024-11-26 07:38:09.553667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.553683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.553826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.553858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.554104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.554138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.554316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.554349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.554612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.554644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.554769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.554803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.554923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.554967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.555150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.555182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.555367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.555400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.555589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.555621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 
00:28:41.622 [2024-11-26 07:38:09.555885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.555916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.556173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.556205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.556419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.556452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.556633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.556665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.556785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.556817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.557010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.557044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.557220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.557252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.557447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.557480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.557666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.557709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.557824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.557857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 
00:28:41.622 [2024-11-26 07:38:09.558035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.558058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.558151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.558167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.558304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.558338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.558581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.558613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.558790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.558822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.622 [2024-11-26 07:38:09.558931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.622 [2024-11-26 07:38:09.558972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.622 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.559187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.559220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.559414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.559429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.559587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.559620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.559806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.559839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 
00:28:41.623 [2024-11-26 07:38:09.560045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.560078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.560313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.560345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.560548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.560565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.560723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.560754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.560997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.561032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.561154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.561187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.561386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.561402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.561555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.561589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.561857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.561890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.562103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.562138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 
00:28:41.623 [2024-11-26 07:38:09.562321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.562354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.562557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.562590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.562798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.562831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.563099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.563133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.563394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.563428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.563552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.563568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.563802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.563835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.564009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.564042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.564179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.564213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.564410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.564443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 
00:28:41.623 [2024-11-26 07:38:09.564574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.564589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.564823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.564855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.565050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.565085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.565230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.565264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.565530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.565562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.565681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.565714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.565845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.565878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.566078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.566113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.566353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.566386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.566582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.566615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 
00:28:41.623 [2024-11-26 07:38:09.566880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.566912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.567201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.567235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.567364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.567397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.623 qpair failed and we were unable to recover it. 00:28:41.623 [2024-11-26 07:38:09.567629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.623 [2024-11-26 07:38:09.567645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.567742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.567775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.567892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.567925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.568075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.568109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.568301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.568344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.568492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.568508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.568727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.568743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 
00:28:41.624 [2024-11-26 07:38:09.568893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.568910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.569051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.569067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.569154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.569170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.569324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.569340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.569439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.569455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.569619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.569652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.569852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.569885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.570079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.570113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.570234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.570267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.570381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.570413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 
00:28:41.624 [2024-11-26 07:38:09.570654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.570687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.570902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.570935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.571120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.571136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.571285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.571301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.571536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.571569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.571696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.571735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.571861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.571893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.572151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.572184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.572375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.572408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.572617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.572633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 
00:28:41.624 [2024-11-26 07:38:09.572735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.572767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.572944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.572987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.573113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.573145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.573290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.573322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.573443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.573486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.573736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.573752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.573900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.573916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.574148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.574182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.574305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.574338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.574538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.574572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 
00:28:41.624 [2024-11-26 07:38:09.574663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.624 [2024-11-26 07:38:09.574678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.624 qpair failed and we were unable to recover it. 00:28:41.624 [2024-11-26 07:38:09.574943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.574984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.575173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.575206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.575393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.575426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.575604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.575636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.575817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.575849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.576035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.576068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.576324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.576357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.576494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.576526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.576795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.576826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 
00:28:41.625 [2024-11-26 07:38:09.577002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.577036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.577243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.577274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.577501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.577534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.577732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.577766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.578036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.578070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.578251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.578283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.578528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.578561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.578751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.578784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.578998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.579031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.579273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.579289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 
00:28:41.625 [2024-11-26 07:38:09.579481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.579497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.579583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.579599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.579736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.579752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.579990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.580006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.580210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.580226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.580381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.580399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.580534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.580550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.580732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.580765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.580970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.581002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.581112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.581143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 
00:28:41.625 [2024-11-26 07:38:09.581319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.581351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.581542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.581574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.581757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.581789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.582032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.582068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.582287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.582321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.582497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.582528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.582650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.582683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.582873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.582905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.625 [2024-11-26 07:38:09.583103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.625 [2024-11-26 07:38:09.583136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.625 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.583333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.583365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 
00:28:41.626 [2024-11-26 07:38:09.583559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.583591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.583709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.583742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.583916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.583956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.584163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.584196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.584448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.584478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.584741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.584773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.584987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.585021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.585213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.585245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.585431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.585462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.585669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.585702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 
00:28:41.626 [2024-11-26 07:38:09.585878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.585909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.586065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.586099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.586368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.586401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.586579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.586611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.586802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.586834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.587014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.587048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.587177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.587193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.587412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.587445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.587691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.587723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.587914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.587946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 
00:28:41.626 [2024-11-26 07:38:09.588137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.588169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.588346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.588379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.588624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.588656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.588870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.588904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.589147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.589180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.589367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.589406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.589591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.589623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.589886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.589918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.590152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.590226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.590435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.590471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 
00:28:41.626 [2024-11-26 07:38:09.590565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.590581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.590797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.590831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.591028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.591065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.591247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.591280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.591476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.591492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.591598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.591631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.591815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.591847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.591968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.626 [2024-11-26 07:38:09.592003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.626 qpair failed and we were unable to recover it. 00:28:41.626 [2024-11-26 07:38:09.592180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.592221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.592381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.592397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 
00:28:41.627 [2024-11-26 07:38:09.592616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.592650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.592768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.592801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.592921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.592964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.593096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.593130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.593249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.593282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.593479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.593512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.593747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.593780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.593898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.593931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.594064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.594097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.594285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.594318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 
00:28:41.627 [2024-11-26 07:38:09.594443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.594476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.594726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.594742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.594822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.594841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.595068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.595085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.595258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.595274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.595403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.595435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.595613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.595645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.595774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.595806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.595997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.596030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.596149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.596183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 
00:28:41.627 [2024-11-26 07:38:09.596425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.596458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.596654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.596686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.596858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.596891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.597024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.597058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.597192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.597225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.597428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.597461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.597650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.597666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.597761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.597803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.597924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.597974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.598086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.598120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 
00:28:41.627 [2024-11-26 07:38:09.598233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.598265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.598432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.598447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.598625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.598657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.598764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.598796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.627 [2024-11-26 07:38:09.598905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.627 [2024-11-26 07:38:09.598938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.627 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.599071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.599105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.599345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.599361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.599444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.599460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.599528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.599544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.599618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.599666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 
00:28:41.628 [2024-11-26 07:38:09.599796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.599829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.599964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.599998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.600190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.600224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.600441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.600475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.600596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.600629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.600846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.600879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.601069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.601104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.601226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.601259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.601436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.601452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.601551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.601567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 
00:28:41.628 [2024-11-26 07:38:09.601649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.601665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.601920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.601963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.602152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.602186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.602319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.602352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.602586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.602602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.602692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.602708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.602851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.602884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.603024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.603058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.603263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.603296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.603502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.603535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 
00:28:41.628 [2024-11-26 07:38:09.603742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.603757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.603863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.603895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.604087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.604120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.604259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.604292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.604557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.604589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.604773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.604805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.604945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.604991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.605180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.605213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.605465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.605497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.605619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.605634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 
00:28:41.628 [2024-11-26 07:38:09.605806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.605822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.605999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.606034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.606172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.606188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.606345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.606374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.606550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.628 [2024-11-26 07:38:09.606583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.628 qpair failed and we were unable to recover it. 00:28:41.628 [2024-11-26 07:38:09.606764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.606796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.606925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.606965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.607211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.607244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.607446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.607462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.607701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.607733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 
00:28:41.629 [2024-11-26 07:38:09.607868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.607901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.608088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.608122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.608317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.608334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.608478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.608510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.608709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.608742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.608850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.608882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.609124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.609158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.609362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.609395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.609683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.609715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.609893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.609925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 
00:28:41.629 [2024-11-26 07:38:09.610121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.610153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.610389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.610422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.610617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.610634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.610710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.610726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.610884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.610900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.611061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.611096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.611269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.611301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.611425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.611458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.611647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.611663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.611871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.611903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 
00:28:41.629 [2024-11-26 07:38:09.612166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.612201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.612409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.612443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.612561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.612593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.612835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.612867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.613041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.613075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.613269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.613301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.613545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.613578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.613836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.613873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.613992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.614027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.614275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.614307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 
00:28:41.629 [2024-11-26 07:38:09.614602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.614634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.614768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.614801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.615020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.615054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.615254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.615287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.615465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.615481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.615585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.615623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.615894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.615926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.616184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.629 [2024-11-26 07:38:09.616218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.629 qpair failed and we were unable to recover it. 00:28:41.629 [2024-11-26 07:38:09.616412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.616445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.616614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.616630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 
00:28:41.630 [2024-11-26 07:38:09.616781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.616814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.617085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.617118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.617234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.617266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.617434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.617450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.617646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.617679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.617859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.617891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.618206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.618240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.618483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.618515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.618726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.618759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.618884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.618917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 
00:28:41.630 [2024-11-26 07:38:09.619190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.619261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.619441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.619453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.619526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.619538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.619753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.619786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.620078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.620123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.620319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.620353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.620650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.620692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.620850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.620862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.621016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.621050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.621241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.621274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 
00:28:41.630 [2024-11-26 07:38:09.621459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.621492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.621617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.621629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.621777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.621789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.621926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.621938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.622072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.622084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.622218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.622230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.622314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.622326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.622497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.622509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.622720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.622752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.622859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.622891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 
00:28:41.630 [2024-11-26 07:38:09.623015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.623049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.623257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.623289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.623559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.623590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.623782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.623814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.624031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.624065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.624251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.624284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.624491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.624523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.624700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.624732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.624918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.624958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 00:28:41.630 [2024-11-26 07:38:09.625143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.630 [2024-11-26 07:38:09.625175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.630 qpair failed and we were unable to recover it. 
00:28:41.630 [2024-11-26 07:38:09.625356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.630 [2024-11-26 07:38:09.625368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.630 qpair failed and we were unable to recover it.
00:28:41.631 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 07:38:09.625 through 07:38:09.664, almost entirely for tqpair=0x7f76c4000b90, with a short run against tqpair=0x7f76cc000b90 around 07:38:09.647-07:38:09.650; only the first and last occurrences are shown here ...]
00:28:41.919 [2024-11-26 07:38:09.664844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.919 [2024-11-26 07:38:09.664875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.919 qpair failed and we were unable to recover it.
00:28:41.919 [2024-11-26 07:38:09.665080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.665113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.665299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.665330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.665509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.665540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.665678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.665689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.665761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.665775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.665917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.665979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.666090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.666122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.666234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.666265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.666476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.666508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.666731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.666743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 
00:28:41.919 [2024-11-26 07:38:09.666871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.666882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.666963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.666975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.667119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.667131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.667241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.667274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.667396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.667428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.667549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.667581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.667794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.667826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.668103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.668137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.668341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.668374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.668510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.668543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 
00:28:41.919 [2024-11-26 07:38:09.668653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.668685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.668865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.668895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.669019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.669052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.669173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.669205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.669468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.669499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.669690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.669722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.669824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.669836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.669987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.919 [2024-11-26 07:38:09.669999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.919 qpair failed and we were unable to recover it. 00:28:41.919 [2024-11-26 07:38:09.670168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.670179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.670281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.670316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 
00:28:41.920 [2024-11-26 07:38:09.670387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.670398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.670471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.670483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.670626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.670659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.670837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.670869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.670986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.671020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.671213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.671244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.671430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.671442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.671647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.671679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.671865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.671897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.672095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.672128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 
00:28:41.920 [2024-11-26 07:38:09.672371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.672403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.672542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.672573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.672725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.672757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.672941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.672982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.673248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.673279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.673384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.673396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.673580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.673592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.673768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.673780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.673868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.673901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.674152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.674185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 
00:28:41.920 [2024-11-26 07:38:09.674365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.674397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.674508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.674540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.674724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.674757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.674915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.674927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.675083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.675117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.675316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.675348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.675596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.675627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.675895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.675928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.676084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.676117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.676360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.676392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 
00:28:41.920 [2024-11-26 07:38:09.676633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.676665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.676884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.676896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.677047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.677082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.677275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.677307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.677485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.677516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.677647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.677669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.677808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.677820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.677891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.677902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.678108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.678122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.678270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.678282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 
00:28:41.920 [2024-11-26 07:38:09.678358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.678370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.678449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.678463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.678526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.678538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.678683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.920 [2024-11-26 07:38:09.678695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.920 qpair failed and we were unable to recover it. 00:28:41.920 [2024-11-26 07:38:09.678787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.678799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.678881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.678893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.679030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.679064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.679234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.679268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.679391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.679423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.679530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.679563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 
00:28:41.921 [2024-11-26 07:38:09.679764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.679796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.679990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.680022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.680203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.680236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.680470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.680482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.680637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.680675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.680791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.680822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.681013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.681047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.681243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.681275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.681526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.681559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.681801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.681833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 
00:28:41.921 [2024-11-26 07:38:09.682008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.682041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.682162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.682195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.682385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.682417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.682594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.682625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.682742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.682774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.682924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.682935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.683079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.683090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.683158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.683169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.683397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.683430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.683700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.683732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 
00:28:41.921 [2024-11-26 07:38:09.683827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.683838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.683980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.683992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.684056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.684068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.684131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.684142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.684225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.684236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.684513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.684545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.684677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.684708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.684988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.685016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.685103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.685114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.685213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.685225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 
00:28:41.921 [2024-11-26 07:38:09.685467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.685479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.685582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.685619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.685805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.685838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.685968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.686001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.686180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.686212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.686345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.686377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.686553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.686565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.686704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.686716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.686942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.687006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.687251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.687283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 
00:28:41.921 [2024-11-26 07:38:09.687475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.687508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.687698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.687730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.687994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.688027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.688162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.688193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.688385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.688417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.921 [2024-11-26 07:38:09.688622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.921 [2024-11-26 07:38:09.688654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.921 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.688799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.688831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.689037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.689071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.689252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.689284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.689468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.689500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 
00:28:41.922 [2024-11-26 07:38:09.689673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.689715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.689874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.689885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.690056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.690068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.690280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.690312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.690504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.690536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.690715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.690755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.690919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.690930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.691128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.691140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.691305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.691337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.691602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.691634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 
00:28:41.922 [2024-11-26 07:38:09.691922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.691959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.692081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.692113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.692251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.692284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.692418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.692430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.692555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.692566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.692636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.692648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.692867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.692878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.693023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.693035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.693109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.693141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.693333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.693365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 
00:28:41.922 [2024-11-26 07:38:09.693562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.693594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.693783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.693796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.693956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.693969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.694056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.694070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.694229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.694261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.694391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.694423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.694534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.694566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.694762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.694799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.695023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.695035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.695172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.695183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 
00:28:41.922 [2024-11-26 07:38:09.695307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.695339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.695617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.695649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.695774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.695806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.696054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.696087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.696352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.696384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.696579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.696591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.696819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.696850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.697139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.697172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.697356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.697387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.697647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.697659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 
00:28:41.922 [2024-11-26 07:38:09.697732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.697744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.697879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.697890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.697993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.698005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.698154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.698166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.698307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.698318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.698410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.698421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.698559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.698598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.922 [2024-11-26 07:38:09.698782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.922 [2024-11-26 07:38:09.698814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.922 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.699015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.699049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.699301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.699333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 
00:28:41.923 [2024-11-26 07:38:09.699468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.699479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.699564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.699575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.699751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.699783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.699974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.700006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.700202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.700233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.700356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.700388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.700570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.700601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.700885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.700917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.701042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.701076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.701283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.701316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 
00:28:41.923 [2024-11-26 07:38:09.701435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.701466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.701652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.701689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.701814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.701846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.701968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.702000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.702186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.702218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.702413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.702445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.702579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.702610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.702868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.702901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.703063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.703096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.703218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.703250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 
00:28:41.923 [2024-11-26 07:38:09.703489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.703526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.703662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.703674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.703747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.703758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.703885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.703896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.704033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.704045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.704115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.704127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.704190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.704201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.704368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.704401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.704528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.704561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.704667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.704698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 
00:28:41.923 [2024-11-26 07:38:09.704876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.704888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.705050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.705063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.705205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.705216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.705456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.705468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.705542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.705553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.705701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.705712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.705856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.705887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.706018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.706052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.706233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.706267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.706444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.706477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 
00:28:41.923 [2024-11-26 07:38:09.706607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.706639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.706757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.706789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.706989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.707023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.707236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.707268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.923 [2024-11-26 07:38:09.707450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.923 [2024-11-26 07:38:09.707482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.923 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.707590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.707621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.707803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.707835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.708075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.708108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.708248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.708280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.708468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.708500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 
00:28:41.924 [2024-11-26 07:38:09.708710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.708742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.708865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.708902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.709076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.709110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.709379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.709411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.709606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.709637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.709827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.709859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.710006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.710040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.710166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.710198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.710445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.710477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.710688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.710720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 
00:28:41.924 [2024-11-26 07:38:09.710927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.710967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.711162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.711195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.711388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.711420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.711607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.711640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.711835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.711867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.712062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.712096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.712292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.712323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.712516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.712548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.712725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.712757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.712929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.712970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 
00:28:41.924 [2024-11-26 07:38:09.713148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.713179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.713364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.713395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.713581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.713613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.713795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.713828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.714012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.714045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.714220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.714251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.714427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.714459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.714739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.714771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.714994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.715030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.715297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.715329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 
00:28:41.924 [2024-11-26 07:38:09.715575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.715606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.715781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.715812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.716037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.716049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.716120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.716131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.716269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.716280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.716482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.716494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 895701 Killed "${NVMF_APP[@]}" "$@" 00:28:41.924 [2024-11-26 07:38:09.716716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.716728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.716950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.716962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.717187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.717199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 
00:28:41.924 [2024-11-26 07:38:09.717291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.717303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:41.924 [2024-11-26 07:38:09.717363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.717377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.717574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.717586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:41.924 [2024-11-26 07:38:09.717746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.717759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.717883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.717895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 [2024-11-26 07:38:09.717992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.718004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.924 qpair failed and we were unable to recover it. 00:28:41.924 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.924 [2024-11-26 07:38:09.718081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.924 [2024-11-26 07:38:09.718093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.718155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.718167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 
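Editor's note on the interleaved shell trace above: target_disconnect.sh reported the previous NVMF target process (PID 895701) as Killed, and disconnect_init 10.0.0.2 is now restarting it via nvmfappstart -m 0xF0. The -m argument appears to be an SPDK-style hex core mask; 0xF0 has bits 4 through 7 set, i.e. cores 4-7. The small sketch below only illustrates the mask arithmetic and is not SPDK's own parser.

/* Minimal sketch: decode a hex core mask such as the "-m 0xF0" passed to
 * nvmfappstart above. This illustrates the bit arithmetic only; it is not
 * SPDK's mask-parsing code. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xF0;   /* value taken from the log's nvmfappstart -m 0xF0 */

    printf("core mask 0x%llX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n");   /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}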
00:28:41.925 [2024-11-26 07:38:09.718298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.718310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.718453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.718465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.718543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.718555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:41.925 [2024-11-26 07:38:09.718726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.718739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.718821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.718833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.719067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.719079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.719191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.719203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.719362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.719374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.719467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.719479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 
00:28:41.925 [2024-11-26 07:38:09.719612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.719624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.719687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.719698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.719788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.719800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.719997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.720010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.720075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.720087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.720239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.720250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.720398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.720409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.720476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.720488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.720565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.720577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 00:28:41.925 [2024-11-26 07:38:09.720707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.925 [2024-11-26 07:38:09.720721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.925 qpair failed and we were unable to recover it. 
00:28:41.925 [2024-11-26 07:38:09.720875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.925 [2024-11-26 07:38:09.720886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.925 qpair failed and we were unable to recover it.
00:28:41.925 [last two messages repeated for every reconnect attempt from 07:38:09.721030 through 07:38:09.724215]
00:28:41.926 [connect() failed, errno = 111 / qpair failed pair repeats, 07:38:09.724346 through 07:38:09.725654, interleaved with the target start-up trace below]
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=896417
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 896417
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 896417 ']'
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:41.926 [connect() failed, errno = 111 / qpair failed pair keeps interleaving with the trace above, 07:38:09.725751 through 07:38:09.726513]
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:41.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:41.926 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:41.926 [connect() failed, errno = 111 / qpair failed pair continues repeating, 07:38:09.726652 through 07:38:09.727539]
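For readers following the trace above: the target has just been launched with nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and the script now waits for it to come up and listen on its RPC UNIX domain socket. Below is a minimal bash sketch of that wait-for-listen pattern, using only the values visible in the trace (rpc_addr=/var/tmp/spdk.sock, nvmfpid=896417, max_retries=100); the loop is an illustration of the idea, not the actual waitforlisten implementation from autotest_common.sh.

# Sketch only (assumed logic, not the real waitforlisten): poll until the
# freshly started target exposes its RPC UNIX domain socket, or give up.
rpc_sock=/var/tmp/spdk.sock   # rpc_addr seen in the trace
pid=896417                    # nvmfpid seen in the trace
max_retries=100               # max_retries seen in the trace
for ((i = 0; i < max_retries; i++)); do
    # Give up early if the target process died instead of listening.
    kill -0 "$pid" 2>/dev/null || { echo "process $pid exited" >&2; exit 1; }
    # Done as soon as the UNIX domain socket shows up.
    [ -S "$rpc_sock" ] && { echo "listening on $rpc_sock"; exit 0; }
    sleep 0.5
done
echo "timed out waiting for $rpc_sock" >&2
exit 1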
00:28:41.926 [2024-11-26 07:38:09.727607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.926 [2024-11-26 07:38:09.727619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.926 qpair failed and we were unable to recover it.
00:28:41.929 [same connect() failed, errno = 111 / qpair failed pair repeated for every further reconnect attempt from 07:38:09.727791 through 07:38:09.746089]
00:28:41.929 [2024-11-26 07:38:09.746153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.746165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.746226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.746238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.746384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.746396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.746531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.746542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.746616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.746628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.746713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.746724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.746784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.746796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.746862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.746874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 
00:28:41.929 [2024-11-26 07:38:09.747238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.747808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.747998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.748011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.748136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.748148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.748294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.748306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 
00:28:41.929 [2024-11-26 07:38:09.748368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.748380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.748507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.748519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.748600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.748612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.748737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.929 [2024-11-26 07:38:09.748749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.929 qpair failed and we were unable to recover it. 00:28:41.929 [2024-11-26 07:38:09.748895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.748906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.749046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.749058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.749256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.749269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.749359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.749370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.749503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.749516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.749733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.749745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 
00:28:41.930 [2024-11-26 07:38:09.749970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.749982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.750061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.750072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.750205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.750217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.750388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.750399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.750478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.750490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.750625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.750637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.750783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.750796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.750924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.750936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.751225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.751261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.751428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.751446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 
00:28:41.930 [2024-11-26 07:38:09.751534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.751551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.751637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.751652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.751871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.751887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.752070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.752106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.752197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.752211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.752286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.752297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.752441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.752453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.752573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.752584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.752786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.752798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.752868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.752879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 
00:28:41.930 [2024-11-26 07:38:09.753022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.753035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.753090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.753101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.753245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.753256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.753344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.753355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.753485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.753497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.753694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.753707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.753854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.753867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.753928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.753939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.754028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.754039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.754198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.754210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 
00:28:41.930 [2024-11-26 07:38:09.754341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.754352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.754432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.754443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.754620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.754633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.754708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.754720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.754939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.754961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.755037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.755048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.755121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.755133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.755197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.755209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.755293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.755304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.755379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.755390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 
00:28:41.930 [2024-11-26 07:38:09.755473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.755485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.930 qpair failed and we were unable to recover it. 00:28:41.930 [2024-11-26 07:38:09.755544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.930 [2024-11-26 07:38:09.755555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.755637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.755650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.755808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.755820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.756043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.756055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.756121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.756133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.756273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.756285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.756356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.756368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.756617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.756629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.756708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.756720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 
00:28:41.931 [2024-11-26 07:38:09.756858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.756870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.756935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.756950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.757175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.757187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.757351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.757370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.757448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.757464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.757648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.757664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.757846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.757861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.757967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.757984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.758129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.758145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.758224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.758240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 
00:28:41.931 [2024-11-26 07:38:09.758380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.758396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.758623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.758639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.758718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.758734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.758871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.758887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.758967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.758984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.759164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.759181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.759325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.759346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.759563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.759578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.759808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.759824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.759927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.759942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 
00:28:41.931 [2024-11-26 07:38:09.760047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.760063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.760217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.760232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.760373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.760388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.760566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.760582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.760804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.760819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.760998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.761015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.761111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.761127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.761203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.761218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.761316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.761331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.761410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.761426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 
00:28:41.931 [2024-11-26 07:38:09.761640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.761653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.761808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.761820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.761908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.761920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.762048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.762060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.762203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.762215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.762378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.762389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.762518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.762529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.762619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.762631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.762711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.762723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.762800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.762812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 
00:28:41.931 [2024-11-26 07:38:09.762894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.762905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.763036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.763049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.763124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.931 [2024-11-26 07:38:09.763136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.931 qpair failed and we were unable to recover it. 00:28:41.931 [2024-11-26 07:38:09.763366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.763403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.763508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.763527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.763603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.763618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.763715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.763730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.763866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.763882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.763971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.763987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.764074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.764089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 
00:28:41.932 [2024-11-26 07:38:09.764237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.764253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.764401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.764416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.764501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.764517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.764654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.764669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.764754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.764770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.764930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.764945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.765086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.765102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.765309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.765326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.765394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.765409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.765506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.765522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 
00:28:41.932 [2024-11-26 07:38:09.765609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.765624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.765707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.765723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.765813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.765828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.765976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.765992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.766131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.766147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.766355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.766370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.766507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.766523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.766740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.766757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.766901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.766917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.767006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.767022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 
00:28:41.932 [2024-11-26 07:38:09.767107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.767123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.767262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.767278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.767367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.767383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.767536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.767552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.767709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.767724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.767872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.767887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.768060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.768077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.768158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.768174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.768352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.768368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.768446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.768462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 
00:28:41.932 [2024-11-26 07:38:09.768693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.768708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.768784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.768799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.769010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.769026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.769165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.769187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.769407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.769423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.769523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.769538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.769684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.769700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.769787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.769803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.769898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.769914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.770061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.770078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 
00:28:41.932 [2024-11-26 07:38:09.770161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.770177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.770278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.770293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.770402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.770418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.770578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.770594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.770670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.932 [2024-11-26 07:38:09.770685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.932 qpair failed and we were unable to recover it. 00:28:41.932 [2024-11-26 07:38:09.770825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.770840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.771002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.771121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.771217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.771318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 
00:28:41.933 [2024-11-26 07:38:09.771485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.771573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.771664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.771781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.771943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.771966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.772061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.772077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.772221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.772237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.772323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.772338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.772483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.772499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.772714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.772730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 
00:28:41.933 [2024-11-26 07:38:09.772842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.772858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.772996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.773012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.773216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.773232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.773385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.773400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.773584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.773599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.773689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.773704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.773851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.773867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.774008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.774025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.774170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.774186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.774388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.774404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 
00:28:41.933 [2024-11-26 07:38:09.774490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.774506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.774586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.774602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.774672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.774688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.774841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.774859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.774999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.775014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.775167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.775182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.775335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.775350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.775497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.775513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.775597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.775613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.775851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.775867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 
00:28:41.933 [2024-11-26 07:38:09.775934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.775955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.776094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.776110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.776249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.776266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it.
00:28:41.933 [2024-11-26 07:38:09.776334] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization...
00:28:41.933 [2024-11-26 07:38:09.776377] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:41.933 [2024-11-26 07:38:09.776415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.776429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.776495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.776509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.776656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.776673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.776842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.776855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.777013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.777028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.777235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.777251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it.
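The two bracketed records above show the nvmf application starting SPDK v25.01-pre and handing its EAL parameters to DPDK 24.03.0. Below is a hypothetical, self-contained sketch, not SPDK code (SPDK assembles this argument vector internally from its own configuration), of how parameters like the ones logged here are passed to DPDK via rte_eal_init(); the --log-level options from the log are omitted for brevity.

/*
 * Hypothetical sketch only: shows how the logged EAL parameters map onto
 * rte_eal_init(). Requires a DPDK installation to build and link.
 */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                           /* program name, as in the logged parameter list */
        "-c", "0xF0",                     /* core mask 0xF0: run EAL threads on cores 4-7 */
        "--no-telemetry",                 /* disable the DPDK telemetry socket */
        "--base-virtaddr=0x200000000000", /* preferred base address for memory mappings */
        "--match-allocations",            /* free hugepage memory back exactly as allocated */
        "--file-prefix=spdk0",            /* namespace runtime/hugepage files for this instance */
        "--proc-type=auto",               /* become primary or secondary process automatically */
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }
    /* ... application work would go here ... */
    rte_eal_cleanup();
    return 0;
}

The core mask and --file-prefix values are useful when correlating this instance's CPU usage and hugepage files with the other processes running in the same test.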
00:28:41.933 [2024-11-26 07:38:09.777324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.777340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.777508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.777524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.777618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.933 [2024-11-26 07:38:09.777634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.933 qpair failed and we were unable to recover it. 00:28:41.933 [2024-11-26 07:38:09.777804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.777820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.777964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.777981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.778114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.778130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.778278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.778294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.778454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.778470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.778612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.778629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.778853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.778870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 
00:28:41.934 [2024-11-26 07:38:09.778956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.778973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.779077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.779101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.779191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.779207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.779439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.779456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.779534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.779549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.779710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.779726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.779877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.779893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.780047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.780063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.780269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.780285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.780377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.780392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 
00:28:41.934 [2024-11-26 07:38:09.780532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.780548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.780641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.780656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.780747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.780762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.781021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.781040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.781203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.781220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.781376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.781391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.781490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.781506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.781650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.781666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.781833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.781849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.782015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.782031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 
00:28:41.934 [2024-11-26 07:38:09.782283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.782299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.782457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.782473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.782645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.782661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.782746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.782762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.782901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.782916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.783106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.783123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.783200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.783220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.783356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.783372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.783518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.783534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.783626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.783642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 
00:28:41.934 [2024-11-26 07:38:09.783841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.783857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.783992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.784009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.784087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.784103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.784244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.784262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.784352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.784368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.784596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.784612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.784766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.784781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.784853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.784869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.784972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.784988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.785129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.785145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 
00:28:41.934 [2024-11-26 07:38:09.785295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.785312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.785397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.785414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.785567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.785583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.785670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.785687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.934 [2024-11-26 07:38:09.785781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.934 [2024-11-26 07:38:09.785797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.934 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.785880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.785896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.785973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.785989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.786127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.786143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.786289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.786305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.786471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.786487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 
00:28:41.935 [2024-11-26 07:38:09.786579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.786595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.786695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.786712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.786786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.786803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.786955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.786970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.787049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.787123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.787216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.787308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.787453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.787606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 
00:28:41.935 [2024-11-26 07:38:09.787762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.787845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.787929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.787940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 
00:28:41.935 [2024-11-26 07:38:09.788683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.788918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.788930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 
00:28:41.935 [2024-11-26 07:38:09.789754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.789916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.789989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.790084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.790293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.790394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.790469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.790614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.790701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 
00:28:41.935 [2024-11-26 07:38:09.790804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.790956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.790968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.791127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.791138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.791308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.791320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.791453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.791464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.791543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.791554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.791618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.935 [2024-11-26 07:38:09.791630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.935 qpair failed and we were unable to recover it. 00:28:41.935 [2024-11-26 07:38:09.791688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.791699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.791839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.791851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.791932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.791944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 
00:28:41.936 [2024-11-26 07:38:09.792175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.792188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.792320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.792333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.792409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.792421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.792492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.792504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.792634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.792646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.792739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.792750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.792875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.792890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.792981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.792993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.793080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.793092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.793154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.793167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 
00:28:41.936 [2024-11-26 07:38:09.793246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.793257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.793386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.793399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.793485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.793497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.793563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.793575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.793728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.793740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.793869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.793881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.794025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.794038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.794169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.794181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.794327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.794339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.794476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.794488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 
00:28:41.936 [2024-11-26 07:38:09.794718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.794729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.794808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.794820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.794883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.794894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.795086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.795098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.795161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.795173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.795330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.795342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.795471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.795484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.795547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.795558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.795753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.795765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.795960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.795973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 
00:28:41.936 [2024-11-26 07:38:09.796053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.796065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.796258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.796270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.796408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.796420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.796579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.796598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.796696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.796712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.796861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.796877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.796974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.796991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.797142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.797158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.797233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.797249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.797400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.797416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 
00:28:41.936 [2024-11-26 07:38:09.797495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.797510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.797596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.797611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.797764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.797780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.797869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.797885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.797967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.797984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.798056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.798072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.798207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.798226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.798310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.798325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.936 qpair failed and we were unable to recover it. 00:28:41.936 [2024-11-26 07:38:09.798483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.936 [2024-11-26 07:38:09.798499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.798652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.798668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 
00:28:41.937 [2024-11-26 07:38:09.798771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.798786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.798879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.798896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.798970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.798986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.799070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.799085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.799306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.799322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.799530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.799545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.799713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.799729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.799874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.799889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.800046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.800063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.800212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.800228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 
00:28:41.937 [2024-11-26 07:38:09.800380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.800396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.800550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.800566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.800765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.800780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.800932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.800951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.801102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.801118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.801350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.801366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.801520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.801536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.801606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.801622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.801714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.801730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.801871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.801887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 
00:28:41.937 [2024-11-26 07:38:09.802027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.802044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.802201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.802217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.802323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.802339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.802506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.802525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.802680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.802696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.802782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.802798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.802968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.802984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.803132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.803149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.803303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.803319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.803415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.803430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 
00:28:41.937 [2024-11-26 07:38:09.803640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.803656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.803759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.803775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.803927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.803942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.804031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.804047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.804253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.804269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.804419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.804434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.804518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.804537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.804778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.804794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.804900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.804915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.805059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.805075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 
00:28:41.937 [2024-11-26 07:38:09.805217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.805233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.805329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.805345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.805481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.805497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.805680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.805696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.805911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.805927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.937 [2024-11-26 07:38:09.806020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.937 [2024-11-26 07:38:09.806036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.937 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.806213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.806228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.806375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.806390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.806477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.806492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.806560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.806576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 
00:28:41.938 [2024-11-26 07:38:09.806810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.806826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.806974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.806990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.807074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.807090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.807234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.807250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.807400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.807416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.807575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.807590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.807748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.807764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.807933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.807954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.808093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.808109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.808251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.808266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 
00:28:41.938 [2024-11-26 07:38:09.808418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.808434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.808665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.808681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.808767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.808783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.808874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.808893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.808995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.809012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.809166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.809181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.809334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.809350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.809451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.809467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.809562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.809578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.809719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.809735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 
00:28:41.938 [2024-11-26 07:38:09.809996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.810979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.810991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 
00:28:41.938 [2024-11-26 07:38:09.811072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.811084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.811216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.811228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.811287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.811299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.811379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.811391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.811520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.811532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.811693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.811705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.811787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.811799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.811875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.811886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.812026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.812038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.812133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.812144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 
00:28:41.938 [2024-11-26 07:38:09.812237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.812248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.812446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.812458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.812586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.812598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.812676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.812687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.812932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.812943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.813117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.813129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.813214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.813226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.938 qpair failed and we were unable to recover it. 00:28:41.938 [2024-11-26 07:38:09.813309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.938 [2024-11-26 07:38:09.813321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.813405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.813417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.813494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.813507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 
00:28:41.939 [2024-11-26 07:38:09.813657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.813669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.813741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.813753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.813828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.813840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.813987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.814071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.814219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.814303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.814462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.814606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.814686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 
00:28:41.939 [2024-11-26 07:38:09.814770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.814922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.814933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.815095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.815194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.815336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.815436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.815590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.815678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.815776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.815869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 
00:28:41.939 [2024-11-26 07:38:09.815955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.815968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.816037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.816049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.816133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.816145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.816231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.816244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.816379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.816391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.816522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.816534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.816665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.816677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.816871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.816883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.816972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.816984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.817073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.817086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 
00:28:41.939 [2024-11-26 07:38:09.817229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.817241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.817389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.817402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.817493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.817504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.817646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.817658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.817734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.817746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.817823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.817835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.817977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.817989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.818073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.818229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.818320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 
00:28:41.939 [2024-11-26 07:38:09.818408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.818485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.818589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.818673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.818768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.818916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.818928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.819069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.819082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.819163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.819176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.939 [2024-11-26 07:38:09.819238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.939 [2024-11-26 07:38:09.819250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.939 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.819483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.819495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 
00:28:41.940 [2024-11-26 07:38:09.819668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.819680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.819769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.819781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.819849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.819861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.819951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.819963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.820043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.820249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.820325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.820417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.820499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.820571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 
00:28:41.940 [2024-11-26 07:38:09.820730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.820870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.820965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.820977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.821049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.821190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.821272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.821372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.821511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.821592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.821675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 
00:28:41.940 [2024-11-26 07:38:09.821747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.821917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.821930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.822027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.822039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.822180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.822192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.822338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.822350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.822419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.822431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.822565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.822577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.822647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.822659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.822739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.822751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.822891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.822904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 
00:28:41.940 [2024-11-26 07:38:09.823046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.823058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.823219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.823231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.823373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.823385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.823446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.823460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.823610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.823621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.823816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.823828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.823912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.823925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.823997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.824087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.824166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 
00:28:41.940 [2024-11-26 07:38:09.824321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.824473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.824578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.824719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.824795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.824870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.940 [2024-11-26 07:38:09.824881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.940 qpair failed and we were unable to recover it. 00:28:41.940 [2024-11-26 07:38:09.825054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.825143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.825284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.825374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 
00:28:41.941 [2024-11-26 07:38:09.825528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.825706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.825790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.825869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.825954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.825966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.826047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.826125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.826207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.826452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.826531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 
00:28:41.941 [2024-11-26 07:38:09.826621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.826707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.826864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.826950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.826962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.827055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.827067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.827144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.827156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.827234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.827246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.827310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.827321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.827470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.827482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.827547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.827559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 
00:28:41.941 [2024-11-26 07:38:09.827724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.827736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.827880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.827892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.828054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.828066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.828292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.828306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.828386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.828398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.828494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.828506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.828598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.828610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.828745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.828757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.828835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.828847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.829012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.829024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 
00:28:41.941 [2024-11-26 07:38:09.829223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.829235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.829366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.829378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.829451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.829463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.829535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.829547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.829637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.829648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.829781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.829793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.829929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.829941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.830095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.830179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.830315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 
00:28:41.941 [2024-11-26 07:38:09.830487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.830564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.830638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.830709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.830795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.830935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.830951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.831044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.831055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.941 qpair failed and we were unable to recover it. 00:28:41.941 [2024-11-26 07:38:09.831221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.941 [2024-11-26 07:38:09.831234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.831367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.831379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.831456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.831469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 
00:28:41.942 [2024-11-26 07:38:09.831545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.831556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.831715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.831727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.831797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.831808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.831995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.832084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.832222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.832310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.832383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.832472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.832563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 
00:28:41.942 [2024-11-26 07:38:09.832706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.832781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.832942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.832969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.833108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.833122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.833197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.833209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.833269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.833281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.833362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.833373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.833451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.833463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.833550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.833561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.833719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.833731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 
00:28:41.942 [2024-11-26 07:38:09.833929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.833941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 
00:28:41.942 [2024-11-26 07:38:09.834749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.834928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.834940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.835020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.835033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.835187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.835198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.835338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.835350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.835547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.835559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.835634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.835645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.835707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.835718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.835781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.835792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 
00:28:41.942 [2024-11-26 07:38:09.835871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.835882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.836082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.836094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.836269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.836280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.836345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.836356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.836499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.836511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.836711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.836723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.836804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.836817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.836888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.836900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.837096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.942 [2024-11-26 07:38:09.837108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.942 qpair failed and we were unable to recover it. 00:28:41.942 [2024-11-26 07:38:09.837199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.837211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 
00:28:41.943 [2024-11-26 07:38:09.837350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.837361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.837441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.837452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.837582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.837594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.837742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.837755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.837955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.837968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.838030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.838124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.838268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.838371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.838461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 
00:28:41.943 [2024-11-26 07:38:09.838544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.838712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.838808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.838966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.838979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.839053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.839064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.839142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.839154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.839281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.839292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.839371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.839383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.839451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.839463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.839615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.839627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 
00:28:41.943 [2024-11-26 07:38:09.839696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.839708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.839773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.839784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.840002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.840014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.840150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.840162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.840241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.840253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.840446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.840459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.840536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.840548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.840693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.840705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.840779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.840791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.840969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.840981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 
00:28:41.943 [2024-11-26 07:38:09.841069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.841081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.841154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.841165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.841309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.841321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.841461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.841473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.841613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.841625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.841713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.841725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.841866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.841878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.841960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.841972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.842217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.842229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.842302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.842314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 
00:28:41.943 [2024-11-26 07:38:09.842391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.842403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.842469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.842480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.842551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.842563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.842694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.842707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.842843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.842855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.842987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.843000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.943 [2024-11-26 07:38:09.843074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.943 [2024-11-26 07:38:09.843087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.943 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.843145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.843156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.843296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.843308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.843436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.843451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 
00:28:41.944 [2024-11-26 07:38:09.843601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.843612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.843696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.843706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.843777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.843787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.843850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.843860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.844008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.844080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.844152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.844367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.844512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.844648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 
00:28:41.944 [2024-11-26 07:38:09.844800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.844876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.844965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.844976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.845040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.845142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.845299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.845369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.845499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.845656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.845743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 
00:28:41.944 [2024-11-26 07:38:09.845838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.845939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.845960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.846141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.846157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.846243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.846257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.846334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.846349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.846503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.846518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.846600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.846615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.846755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.846770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.846951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.846967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.847053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.847067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 
00:28:41.944 [2024-11-26 07:38:09.847148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.847162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.847296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.847312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.847457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.847472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.847578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.847599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.847740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.847754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.847911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.847923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.848062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.848074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.848232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.848243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.848386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.848397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.848560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.848572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 
00:28:41.944 [2024-11-26 07:38:09.848646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.848657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.848800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.848811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.848903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.848913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.849157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.849168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.849315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.849325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.849463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.849473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.849677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.849687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.849826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.944 [2024-11-26 07:38:09.849836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.944 qpair failed and we were unable to recover it. 00:28:41.944 [2024-11-26 07:38:09.849979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.849990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.850061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.850071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 
00:28:41.945 [2024-11-26 07:38:09.850216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.850226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.850320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.850330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.850529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.850539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.850624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.850634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.850785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.850795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.850919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.850929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.851067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.851163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.851243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.851331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 
00:28:41.945 [2024-11-26 07:38:09.851415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.851503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.851645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.851790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.851953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.851964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.852087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.852098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.852177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.852187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.852273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.852284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.852346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.852356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.852428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.852439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 
00:28:41.945 [2024-11-26 07:38:09.852501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.852511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.852642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.852652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.852792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.852802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.852999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.853083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.853172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.853325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.853396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.853468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.853567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 
00:28:41.945 [2024-11-26 07:38:09.853790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.853885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.853978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.853988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.854122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.854132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.854209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.854220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.854362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.854371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.854440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.854449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.854533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.854543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.854808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.854818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.854891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.854901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 
00:28:41.945 [2024-11-26 07:38:09.855031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.855042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.855125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.855136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.855363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.855373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.855515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.855526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.855616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.855625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.855761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.855771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.855917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.855927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.855991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.856002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.856160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.856170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 00:28:41.945 [2024-11-26 07:38:09.856384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.945 [2024-11-26 07:38:09.856395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.945 qpair failed and we were unable to recover it. 
00:28:41.946 [2024-11-26 07:38:09.856554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.856564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.856700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.856710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.856853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.856864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.856941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.856955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.857091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.857100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.857235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.857245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.857440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.857450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.857521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.857530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.857608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.857619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.857681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.857690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 
00:28:41.946 [2024-11-26 07:38:09.857762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.857772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.857970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.857981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.858120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.858267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.858349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.858431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.858520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.858620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.858761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 
00:28:41.946 [2024-11-26 07:38:09.858807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.946 [2024-11-26 07:38:09.858845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.858914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.858924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.859054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.859065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.859212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.859223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.859298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.859308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.859453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.859464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.859672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.859683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.859828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.859841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.859988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.859999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 
00:28:41.946 [2024-11-26 07:38:09.860135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.860145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.860349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.860359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.860445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.860455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.860539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.860549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.860691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.860702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.860797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.860807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.860891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.860901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.861039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.861050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.861142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.861152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.861376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.861387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 
00:28:41.946 [2024-11-26 07:38:09.861527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.861538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.861681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.861691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.861771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.861782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.861935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.861946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.862099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.862109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.862174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.862184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.862322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.862332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.862528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.862539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.862608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.862618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.862763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.862774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 
00:28:41.946 [2024-11-26 07:38:09.862846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.862857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.862941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.862964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.863063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.946 [2024-11-26 07:38:09.863074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.946 qpair failed and we were unable to recover it. 00:28:41.946 [2024-11-26 07:38:09.863138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.863148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.863337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.863348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.863462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.863490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.863596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.863619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.863706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.863723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.863876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.863888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.864021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 
00:28:41.947 [2024-11-26 07:38:09.864119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.864260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.864332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.864423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.864604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.864704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.864794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.864957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.864969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 
00:28:41.947 [2024-11-26 07:38:09.865297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.865961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.865972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 
00:28:41.947 [2024-11-26 07:38:09.866218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.866946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.866961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.867127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.867222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 
00:28:41.947 [2024-11-26 07:38:09.867373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.867447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.867534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.867611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.867764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.867902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.867988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.867999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.868133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.868143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.868338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.868349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.868426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.868436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 
00:28:41.947 [2024-11-26 07:38:09.868582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.947 [2024-11-26 07:38:09.868593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.947 qpair failed and we were unable to recover it. 00:28:41.947 [2024-11-26 07:38:09.868668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.868678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.868740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.868750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.868883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.868893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.869041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.869052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.869186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.869196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.869283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.869294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.869390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.869400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.869596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.869606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.869770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.869781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 
00:28:41.948 [2024-11-26 07:38:09.869888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.869898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.869965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.869977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.870100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.870172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.870243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.870325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.870411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.870567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.870782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.870863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 
00:28:41.948 [2024-11-26 07:38:09.870950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.870961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.871035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.871045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.871124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.871136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.871281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.871291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.871372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.871382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.871518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.871529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.871674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.871685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.871822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.871832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.871895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.871905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.872029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 
00:28:41.948 [2024-11-26 07:38:09.872103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.872197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.872280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.872430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.872591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.872796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.872883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.872980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.872991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.873198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.873209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.873300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.873310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 
00:28:41.948 [2024-11-26 07:38:09.873454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.873464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.873540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.873550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.873744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.873753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.873882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.873892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.873955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.873965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.874054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.874065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.874152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.874162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.874299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.874309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.874385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.874394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 00:28:41.948 [2024-11-26 07:38:09.874463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.948 [2024-11-26 07:38:09.874473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.948 qpair failed and we were unable to recover it. 
00:28:41.948 [2024-11-26 07:38:09.874544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.948 [2024-11-26 07:38:09.874554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.948 qpair failed and we were unable to recover it.
00:28:41.948 - 00:28:41.953 [2024-11-26 07:38:09.874691 - 07:38:09.898458] the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously, first against tqpair=0x7f76c4000b90 and then, from 07:38:09.894601 onward, against tqpair=0x7f76cc000b90.
00:28:41.953 [2024-11-26 07:38:09.898552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.953 [2024-11-26 07:38:09.898568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.953 qpair failed and we were unable to recover it.
00:28:41.954 [2024-11-26 07:38:09.899541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.954 [2024-11-26 07:38:09.899552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:41.954 qpair failed and we were unable to recover it.
00:28:41.954 [2024-11-26 07:38:09.899695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.899706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.899792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.899803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.899904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.899929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 
00:28:41.954 [2024-11-26 07:38:09.900790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.900944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.900960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.954 [2024-11-26 07:38:09.901282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.954 [2024-11-26 07:38:09.901289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.954 [2024-11-26 07:38:09.901295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.954 [2024-11-26 07:38:09.901301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.954 [2024-11-26 07:38:09.901305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 
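The *NOTICE* records embedded in this stretch come from app_setup_trace in the SPDK application: tracepoint group mask 0xFFFF is enabled, a runtime snapshot can be taken with the 'spdk_trace -s nvmf -i 0' command quoted above (or plain 'spdk_trace' if it is the only SPDK application running), and /dev/shm/nvmf_trace.0 can be copied out for offline analysis. Because /dev/shm is tmpfs and its contents do not survive a reboot, copying that file to durable storage is the simplest way to keep the trace. The sketch below is a minimal, generic POSIX copy; the destination filename is an arbitrary choice for illustration, not anything SPDK defines, and decoding the events is still done with the spdk_trace command named in the log.

    /* Minimal sketch: preserve the runtime trace buffer named in the NOTICE
     * lines (/dev/shm/nvmf_trace.0) by copying it to a regular file.  The
     * destination path below is an assumption made for illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *src = "/dev/shm/nvmf_trace.0";
        const char *dst = "nvmf_trace.0.saved";      /* arbitrary destination */
        char buf[64 * 1024];
        ssize_t n;

        int in = open(src, O_RDONLY);
        if (in < 0) { perror(src); return 1; }

        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (out < 0) { perror(dst); close(in); return 1; }

        while ((n = read(in, buf, sizeof buf)) > 0) {
            if (write(out, buf, (size_t)n) != n) { perror("write"); break; }
        }

        close(in);
        close(out);
        return 0;
    }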
00:28:41.954 [2024-11-26 07:38:09.901523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.901865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.901995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.902008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.902072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.902083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.902212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.902222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.902294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.902304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 00:28:41.954 [2024-11-26 07:38:09.902465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.954 [2024-11-26 07:38:09.902476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.954 qpair failed and we were unable to recover it. 
00:28:41.955 [2024-11-26 07:38:09.902608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.902619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.902690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.902701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.902777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.902787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.902846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.902857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.902924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.902937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.903010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.955 [2024-11-26 07:38:09.902919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.903008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:41.955 [2024-11-26 07:38:09.903125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:41.955 [2024-11-26 07:38:09.903144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.903065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:41.955 [2024-11-26 07:38:09.903223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.903387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 
00:28:41.955 [2024-11-26 07:38:09.903474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.903560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.903715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.903802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.903958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.903975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.904136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.904151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.904285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.904301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.904395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.904410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.904553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.904569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.904661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.904677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 
00:28:41.955 [2024-11-26 07:38:09.904767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.904783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.904932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.904951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.905045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.905061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.905229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.905244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.905400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.905415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.905633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.905652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.905732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.905747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.905832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.905847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.906008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.906025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.955 qpair failed and we were unable to recover it. 00:28:41.955 [2024-11-26 07:38:09.906167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.955 [2024-11-26 07:38:09.906183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 
00:28:41.956 [2024-11-26 07:38:09.906329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.906344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.906440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.906455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.906604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.906620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.906823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.906839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.906922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.906938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.907032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.907047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.907183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.907199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.907269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.907285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.907382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.907397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.907475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.907490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 
00:28:41.956 [2024-11-26 07:38:09.907570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.907585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.907838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.907853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.907989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.908004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.908143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.908158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.908385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.908401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.908481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.908497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.908636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.908651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.908756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.908772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.908928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.908944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.909027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.909042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 
00:28:41.956 [2024-11-26 07:38:09.909134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.909150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.909308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.909323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.909424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.909440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.909523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.909538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.909682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.909697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.909770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.909784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.909873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.909888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.910039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.910055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.910192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.910208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.910292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.910307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 
00:28:41.956 [2024-11-26 07:38:09.910464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.910482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.910638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.910653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.910789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.910804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.910889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.910904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.910989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.911005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.911152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.956 [2024-11-26 07:38:09.911171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.956 qpair failed and we were unable to recover it. 00:28:41.956 [2024-11-26 07:38:09.911243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.911258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.911354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.911369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.911445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.911460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.911530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.911545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 
00:28:41.957 [2024-11-26 07:38:09.911636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.911653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.911729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.911744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.911879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.911895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.911980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.911996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.912162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.912173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.912251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.912261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.912392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.912402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.912469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.912479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.912680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.912691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.912782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.912793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 
00:28:41.957 [2024-11-26 07:38:09.912871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.912882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.912957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.912968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.913052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.913062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.913135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.913146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.913240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.913252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.913399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.913410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.913483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.913494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.913635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.913646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.913786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.913798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.913934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.913946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 
00:28:41.957 [2024-11-26 07:38:09.914027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.914107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.914200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.914304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.914387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.914468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.914606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.914690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.914837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.914848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.915046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.915058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 
00:28:41.957 [2024-11-26 07:38:09.915133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.915145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.915204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.915214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.915285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.915296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.957 [2024-11-26 07:38:09.915369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.957 [2024-11-26 07:38:09.915380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.957 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.915509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.915519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.915605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.915619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.915677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.915687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.915850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.915862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.915935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.915951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.916094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.916105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 
00:28:41.958 [2024-11-26 07:38:09.916173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.916184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.916317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.916329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.916405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.916416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.916565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.916577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.916735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.916748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.916820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.916832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.916912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.916923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.917063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.917076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.917139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.917149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.917226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.917238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 
00:28:41.958 [2024-11-26 07:38:09.917393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.917405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.917488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.917500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.917654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.917666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.917888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.917901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.918063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.918208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.918304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.918458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.918533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.918603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 
00:28:41.958 [2024-11-26 07:38:09.918686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.918774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.918921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.918933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.919077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.919090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.919174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.919186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.919314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.919325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.919394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.919405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.919480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.919491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.919554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.919565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.919639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.919650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 
00:28:41.958 [2024-11-26 07:38:09.919710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.958 [2024-11-26 07:38:09.919720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.958 qpair failed and we were unable to recover it. 00:28:41.958 [2024-11-26 07:38:09.919793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.919805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.919873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.919885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.919959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.919971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.920042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.920123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.920263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.920340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.920419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.920510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 
00:28:41.959 [2024-11-26 07:38:09.920594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.920663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.920828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.920840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.921040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.921054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.921135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.921147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.921291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.921303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.921443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.921455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.921521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.921531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.921603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.921613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.921747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.921759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 
00:28:41.959 [2024-11-26 07:38:09.921904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.921915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 
00:28:41.959 [2024-11-26 07:38:09.922896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.922983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.922994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.923061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.923072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.923143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.959 [2024-11-26 07:38:09.923153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.959 qpair failed and we were unable to recover it. 00:28:41.959 [2024-11-26 07:38:09.923235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.923247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.923338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.923349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.923411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.923421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.923584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.923596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.923670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.923680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.923811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.923822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 
00:28:41.960 [2024-11-26 07:38:09.923896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.923907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.923983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.923994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.924138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.924221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.924301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.924447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.924523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.924594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.924682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.924836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 
00:28:41.960 [2024-11-26 07:38:09.924979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.924991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 
00:28:41.960 [2024-11-26 07:38:09.925864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.925952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.925964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 
00:28:41.960 [2024-11-26 07:38:09.926776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.960 qpair failed and we were unable to recover it. 00:28:41.960 [2024-11-26 07:38:09.926931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.960 [2024-11-26 07:38:09.926943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.927089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.927270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.927418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.927493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.927570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.927669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.927742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.927835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 
00:28:41.961 [2024-11-26 07:38:09.927972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.927983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.928879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.928889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 
00:28:41.961 [2024-11-26 07:38:09.929026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.929859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 
00:28:41.961 [2024-11-26 07:38:09.929932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.929942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.930018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.930029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.930089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.930100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.930166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.930177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.930251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.930261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.930340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.930352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.930492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.930502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.930565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.930576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.961 [2024-11-26 07:38:09.930640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.961 [2024-11-26 07:38:09.930650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.961 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.930724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.930735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 
00:28:41.962 [2024-11-26 07:38:09.930819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.930830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.930988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.931066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.931150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.931226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.931310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.931384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.931532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.931689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.931761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 
00:28:41.962 [2024-11-26 07:38:09.931920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.931932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.932911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.932922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 
00:28:41.962 [2024-11-26 07:38:09.932996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.933843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 
00:28:41.962 [2024-11-26 07:38:09.933927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.933938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.934005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.934015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.934076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.934087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.934166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.934175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.934242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.934253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.934320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.962 [2024-11-26 07:38:09.934330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.962 qpair failed and we were unable to recover it. 00:28:41.962 [2024-11-26 07:38:09.934462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.934474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.934545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.934556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.934625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.934635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.934699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.934710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 
00:28:41.963 [2024-11-26 07:38:09.934773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.934783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.934848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.934859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.934925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.934936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 
00:28:41.963 [2024-11-26 07:38:09.935667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.935970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.935981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.936041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.936120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.936209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.936290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.936436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.936577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 
00:28:41.963 [2024-11-26 07:38:09.936665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.936752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.936892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.936903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 
00:28:41.963 [2024-11-26 07:38:09.937665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.963 [2024-11-26 07:38:09.937908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.963 [2024-11-26 07:38:09.937919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.963 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 
00:28:41.964 [2024-11-26 07:38:09.938467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.938956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.938972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 
00:28:41.964 [2024-11-26 07:38:09.939509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.939955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.939976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.940055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.940140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.940309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.940457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 
00:28:41.964 [2024-11-26 07:38:09.940541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.940630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.940699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.940771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.940924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.940935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.941011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.941022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.941162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.941173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.964 qpair failed and we were unable to recover it. 00:28:41.964 [2024-11-26 07:38:09.941243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.964 [2024-11-26 07:38:09.941253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.941313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.941324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.941393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.941402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 
00:28:41.965 [2024-11-26 07:38:09.941488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.941499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.941626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.941637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.941720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.941731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.941788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.941798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.941941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.941957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 
00:28:41.965 [2024-11-26 07:38:09.942531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.942909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.942919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 
00:28:41.965 [2024-11-26 07:38:09.943513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.943972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.943986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.944062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.944073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.944133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.944143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.944275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.944285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.944362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.944372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 
00:28:41.965 [2024-11-26 07:38:09.944527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.944537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.944609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.944619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.944680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.944690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.944746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.965 [2024-11-26 07:38:09.944756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.965 qpair failed and we were unable to recover it. 00:28:41.965 [2024-11-26 07:38:09.944817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.944827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.944903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.944914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.944981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.944991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.945132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.945142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.945205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.945216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.945376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.945387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 
00:28:41.966 [2024-11-26 07:38:09.945467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.945478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.945614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.945625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.945760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.945771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.945851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.945861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 
00:28:41.966 [2024-11-26 07:38:09.946536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.946856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.946871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 
00:28:41.966 [2024-11-26 07:38:09.947564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.947908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.947993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.948010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.948099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.948120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.948280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.948295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.948416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.948432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.948523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.948539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.948610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.948625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 
00:28:41.966 [2024-11-26 07:38:09.948697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.948712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.966 [2024-11-26 07:38:09.948789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.966 [2024-11-26 07:38:09.948805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.966 qpair failed and we were unable to recover it. 00:28:41.967 [2024-11-26 07:38:09.948885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-11-26 07:38:09.948901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.967 qpair failed and we were unable to recover it. 00:28:41.967 [2024-11-26 07:38:09.948990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-11-26 07:38:09.949006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.967 qpair failed and we were unable to recover it. 00:28:41.967 [2024-11-26 07:38:09.949098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-11-26 07:38:09.949113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.967 qpair failed and we were unable to recover it. 00:28:41.967 [2024-11-26 07:38:09.949266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-11-26 07:38:09.949281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.967 qpair failed and we were unable to recover it. 00:28:41.967 [2024-11-26 07:38:09.949422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-11-26 07:38:09.949437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.967 qpair failed and we were unable to recover it. 00:28:41.967 [2024-11-26 07:38:09.949638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-11-26 07:38:09.949653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.967 qpair failed and we were unable to recover it. 00:28:41.967 [2024-11-26 07:38:09.949789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-11-26 07:38:09.949804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.967 qpair failed and we were unable to recover it. 00:28:41.967 [2024-11-26 07:38:09.949904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.967 [2024-11-26 07:38:09.949919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:41.967 qpair failed and we were unable to recover it. 
00:28:41.967 [2024-11-26 07:38:09.950005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.967 [2024-11-26 07:38:09.950021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420
00:28:41.967 qpair failed and we were unable to recover it.
00:28:41.967-00:28:41.972 [2024-11-26 07:38:09.950100 - 07:38:09.971576] The same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt in this window: first on tqpair=0x7f76cc000b90, then on tqpair=0x7f76c0000b90 (from 07:38:09.951238), then on tqpair=0x7f76c4000b90 (from 07:38:09.959531).
00:28:41.972 [2024-11-26 07:38:09.971660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.972 [2024-11-26 07:38:09.971671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.972 qpair failed and we were unable to recover it. 00:28:41.972 [2024-11-26 07:38:09.971733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.972 [2024-11-26 07:38:09.971744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.972 qpair failed and we were unable to recover it. 00:28:41.972 [2024-11-26 07:38:09.971881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.972 [2024-11-26 07:38:09.971894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.972 qpair failed and we were unable to recover it. 00:28:41.972 [2024-11-26 07:38:09.972057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.972 [2024-11-26 07:38:09.972068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.972 qpair failed and we were unable to recover it. 00:28:41.972 [2024-11-26 07:38:09.972130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.972 [2024-11-26 07:38:09.972141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.972212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.972222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.972360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.972370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.972448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.972459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.972529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.972540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.972611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.972622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 
00:28:41.973 [2024-11-26 07:38:09.972692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.972702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.972846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.972857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.972925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.972936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 
00:28:41.973 [2024-11-26 07:38:09.973649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.973982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.973993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 
00:28:41.973 [2024-11-26 07:38:09.974536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.973 qpair failed and we were unable to recover it. 00:28:41.973 [2024-11-26 07:38:09.974911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.973 [2024-11-26 07:38:09.974922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 
00:28:41.974 [2024-11-26 07:38:09.975471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.975958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.975969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 
00:28:41.974 [2024-11-26 07:38:09.976356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.976970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.976981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.977151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.977287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 
00:28:41.974 [2024-11-26 07:38:09.977369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.977451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.977520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.977606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.977762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.977839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.977979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.977990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.978070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.978081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.978139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.978150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 00:28:41.974 [2024-11-26 07:38:09.978220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.974 [2024-11-26 07:38:09.978231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.974 qpair failed and we were unable to recover it. 
00:28:41.975 [2024-11-26 07:38:09.978290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.978301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.978363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.978374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.978458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.978469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.978599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.978610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.978681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.978691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.978816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.978827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.978886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.978897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.978974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.978985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 
00:28:41.975 [2024-11-26 07:38:09.979210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.979936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.979999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 
00:28:41.975 [2024-11-26 07:38:09.980074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.980911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 
00:28:41.975 [2024-11-26 07:38:09.980985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.980997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.981074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.981085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.981142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.975 [2024-11-26 07:38:09.981153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.975 qpair failed and we were unable to recover it. 00:28:41.975 [2024-11-26 07:38:09.981213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.981223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.981291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.981301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.981374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.981385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.981457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.981467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.981544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.981555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.981608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.981619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.981806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.981837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 
00:28:41.976 [2024-11-26 07:38:09.981928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.981944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 
00:28:41.976 [2024-11-26 07:38:09.982868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.982966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.982982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.983051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.983210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.983320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.983411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.983560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.983645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.983728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.983880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 
00:28:41.976 [2024-11-26 07:38:09.983977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.983993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.984144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.984158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.984231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.984246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.984318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.984333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.984578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.984593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.984686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.984701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.984778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.976 [2024-11-26 07:38:09.984791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.976 qpair failed and we were unable to recover it. 00:28:41.976 [2024-11-26 07:38:09.984869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.977 [2024-11-26 07:38:09.984880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.977 qpair failed and we were unable to recover it. 00:28:41.977 [2024-11-26 07:38:09.984936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.977 [2024-11-26 07:38:09.984952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.977 qpair failed and we were unable to recover it. 00:28:41.977 [2024-11-26 07:38:09.985024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.977 [2024-11-26 07:38:09.985035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.977 qpair failed and we were unable to recover it. 
00:28:41.977 [2024-11-26 07:38:09.985096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.977 [2024-11-26 07:38:09.985107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.977 qpair failed and we were unable to recover it. 00:28:41.977 [2024-11-26 07:38:09.985174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.977 [2024-11-26 07:38:09.985185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.977 qpair failed and we were unable to recover it. 00:28:41.977 [2024-11-26 07:38:09.985261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.977 [2024-11-26 07:38:09.985272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.977 qpair failed and we were unable to recover it. 00:28:41.977 [2024-11-26 07:38:09.985400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.977 [2024-11-26 07:38:09.985410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:41.977 qpair failed and we were unable to recover it. 00:28:42.250 [2024-11-26 07:38:09.985585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.250 [2024-11-26 07:38:09.985596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.250 qpair failed and we were unable to recover it. 00:28:42.250 [2024-11-26 07:38:09.985675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.250 [2024-11-26 07:38:09.985686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.250 qpair failed and we were unable to recover it. 00:28:42.250 [2024-11-26 07:38:09.985748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.250 [2024-11-26 07:38:09.985758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.250 qpair failed and we were unable to recover it. 00:28:42.250 [2024-11-26 07:38:09.985825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.250 [2024-11-26 07:38:09.985836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.250 qpair failed and we were unable to recover it. 00:28:42.250 [2024-11-26 07:38:09.985904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.250 [2024-11-26 07:38:09.985915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.250 qpair failed and we were unable to recover it. 00:28:42.250 [2024-11-26 07:38:09.986004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.250 [2024-11-26 07:38:09.986018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.250 qpair failed and we were unable to recover it. 
00:28:42.250 [2024-11-26 07:38:09.986089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.250 [2024-11-26 07:38:09.986100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.250 qpair failed and we were unable to recover it. 00:28:42.250 [2024-11-26 07:38:09.986180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.250 [2024-11-26 07:38:09.986190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.250 qpair failed and we were unable to recover it. 00:28:42.250 [2024-11-26 07:38:09.986250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.986335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.986416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.986494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.986568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.986638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.986706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.986864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 
00:28:42.251 [2024-11-26 07:38:09.986934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.986945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 
00:28:42.251 [2024-11-26 07:38:09.987728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.987972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.987983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 
00:28:42.251 [2024-11-26 07:38:09.988593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.988969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.988980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.989058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.989068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.989211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.989221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.989291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.989302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 00:28:42.251 [2024-11-26 07:38:09.989389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.251 [2024-11-26 07:38:09.989399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.251 qpair failed and we were unable to recover it. 
00:28:42.251 [2024-11-26 07:38:09.989462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.989473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.989542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.989553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.989615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.989625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.989706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.989716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.989776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.989786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.989864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.989875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.989939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.989954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 
00:28:42.252 [2024-11-26 07:38:09.990301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.990981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.990992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 
00:28:42.252 [2024-11-26 07:38:09.991125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.991933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.991995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.992006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 
00:28:42.252 [2024-11-26 07:38:09.992145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.992156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.992224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.992235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.992298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.992308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.992373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.992383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.992533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.992544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.992617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.992627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.992692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.252 [2024-11-26 07:38:09.992703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.252 qpair failed and we were unable to recover it. 00:28:42.252 [2024-11-26 07:38:09.992855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.992865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.992940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.992977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 
00:28:42.253 [2024-11-26 07:38:09.993116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:42.253 [2024-11-26 07:38:09.993802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 
00:28:42.253 [2024-11-26 07:38:09.993884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.993979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.993994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.994066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.994082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24ba0 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.253 [2024-11-26 07:38:09.994170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.994193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.994285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.994300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.994374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.994389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.253 [2024-11-26 07:38:09.994528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.994543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.994610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.994625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 07:38:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.253 [2024-11-26 07:38:09.994765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.994780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 
00:28:42.253 [2024-11-26 07:38:09.994919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.994934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76cc000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.995927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.995937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 
00:28:42.253 [2024-11-26 07:38:09.996017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.996028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.996110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.996121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.996192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.253 [2024-11-26 07:38:09.996202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.253 qpair failed and we were unable to recover it. 00:28:42.253 [2024-11-26 07:38:09.996274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.996285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.996362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.996373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.996443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.996453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.996515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.996526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.996591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.996602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.996676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.996686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.996745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.996755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 
00:28:42.254 [2024-11-26 07:38:09.996828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.996840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.996993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 
00:28:42.254 [2024-11-26 07:38:09.997854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.997937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.997952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.998022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.998033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.998162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.998173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.998249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.998261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.998408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.998419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.998488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.998500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.998627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.998638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.998837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.998849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.998910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.998920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 
00:28:42.254 [2024-11-26 07:38:09.998997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.254 [2024-11-26 07:38:09.999633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.254 qpair failed and we were unable to recover it. 00:28:42.254 [2024-11-26 07:38:09.999704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.255 [2024-11-26 07:38:09.999714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.255 qpair failed and we were unable to recover it. 
00:28:42.255 [2024-11-26 07:38:09.999778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.255 [2024-11-26 07:38:09.999789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:42.255 qpair failed and we were unable to recover it.
00:28:42.255-00:28:42.261 [2024-11-26 07:38:09.999868 through 07:38:10.019079] the same posix.c:1054:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats for every subsequent connection attempt.
00:28:42.261 [2024-11-26 07:38:10.019153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.019875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 
00:28:42.261 [2024-11-26 07:38:10.019960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.019972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 
00:28:42.261 [2024-11-26 07:38:10.020740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.020889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.261 [2024-11-26 07:38:10.020900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.261 qpair failed and we were unable to recover it. 00:28:42.261 [2024-11-26 07:38:10.021037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 
00:28:42.262 [2024-11-26 07:38:10.021601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.021966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.021977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 
00:28:42.262 [2024-11-26 07:38:10.022621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.022911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.022988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.023061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.023134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.023204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.023363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.023433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 
00:28:42.262 [2024-11-26 07:38:10.023580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.023723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.023800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.023904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.023934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.024031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.024047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.024135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.024151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.024230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.024245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.024337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.024352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.024428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.024442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.024529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.024544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 
00:28:42.262 [2024-11-26 07:38:10.024623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.262 [2024-11-26 07:38:10.024639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.262 qpair failed and we were unable to recover it. 00:28:42.262 [2024-11-26 07:38:10.024706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.024721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.025232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.025257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.025480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.025496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.025663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.025679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.025767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.025782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.025987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.026017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.026107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.026122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.026209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.026226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.026320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.026335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 
00:28:42.263 [2024-11-26 07:38:10.026476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.026492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.026587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.026604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.027048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.027071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.027170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.027186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.027271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.027285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.027476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.027493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.027672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.027688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.027834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.027849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.027995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.028073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 
00:28:42.263 [2024-11-26 07:38:10.028229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.028317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.028414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.028525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.028625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.028717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.028813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.028907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.028918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.029040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.029058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.029161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.029187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 
00:28:42.263 [2024-11-26 07:38:10.029293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.029320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.029442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.029465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.029542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.029557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.029666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.029681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.029762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.029779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.029956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.263 [2024-11-26 07:38:10.029974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.263 qpair failed and we were unable to recover it. 00:28:42.263 [2024-11-26 07:38:10.030084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.030102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.030207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.030228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.030328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.030365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 
00:28:42.264 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.264 [2024-11-26 07:38:10.030486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.030518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.030650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.030674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.030862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.030915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.031083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.031199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.031303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.264 [2024-11-26 07:38:10.031409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.031521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 
00:28:42.264 [2024-11-26 07:38:10.031618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.031700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.031783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.264 [2024-11-26 07:38:10.031795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.031878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.031970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.031981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 
00:28:42.264 [2024-11-26 07:38:10.032427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.032973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.032985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.033135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.033147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.033813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.033835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.033927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.264 [2024-11-26 07:38:10.033939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.264 qpair failed and we were unable to recover it. 00:28:42.264 [2024-11-26 07:38:10.034083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.034095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 
00:28:42.265 [2024-11-26 07:38:10.034232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.034244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.034380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.034392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.034459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.034481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.034627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.034638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.034695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.034705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.034839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.034853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.034932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.034942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 
00:28:42.265 [2024-11-26 07:38:10.035336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.035904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.035989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 
00:28:42.265 [2024-11-26 07:38:10.036235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.036968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.036980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.037043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.037053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 00:28:42.265 [2024-11-26 07:38:10.037118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.265 [2024-11-26 07:38:10.037129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.265 qpair failed and we were unable to recover it. 
00:28:42.265 [2024-11-26 07:38:10.037190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.037916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.037928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 
00:28:42.266 [2024-11-26 07:38:10.037998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.038847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.038858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 
00:28:42.266 [2024-11-26 07:38:10.039039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.039925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.039936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 
00:28:42.266 [2024-11-26 07:38:10.040019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.040031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.040091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.040102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.040166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.040177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.040243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.040255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.040315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.040326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.040416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.266 [2024-11-26 07:38:10.040426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.266 qpair failed and we were unable to recover it. 00:28:42.266 [2024-11-26 07:38:10.040495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.040508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.040653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.040664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.040731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.040742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.040821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.040831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 
00:28:42.267 [2024-11-26 07:38:10.040895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.040905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 
00:28:42.267 [2024-11-26 07:38:10.041840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.041924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.041934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 
00:28:42.267 [2024-11-26 07:38:10.042650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.042984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.042995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.043130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.043142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.043207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.043217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.043281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.043291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.043349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.043360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.043437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.043454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 00:28:42.267 [2024-11-26 07:38:10.043527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.267 [2024-11-26 07:38:10.043537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.267 qpair failed and we were unable to recover it. 
00:28:42.268 [2024-11-26 07:38:10.043602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.043613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.043760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.043771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.043897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.043908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.043987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.043999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 
00:28:42.268 [2024-11-26 07:38:10.044584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.044979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.044991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 
00:28:42.268 [2024-11-26 07:38:10.045519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.045978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.045989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 
00:28:42.268 [2024-11-26 07:38:10.046291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.268 [2024-11-26 07:38:10.046984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.268 [2024-11-26 07:38:10.046996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.268 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.047128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.047206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 
00:28:42.269 [2024-11-26 07:38:10.047286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.047447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.047536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.047616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.047690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.047770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.047861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.047871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 
00:28:42.269 [2024-11-26 07:38:10.048267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.048916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.048926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.049035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.049049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.049180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.049190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.049267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.049278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 
00:28:42.269 [2024-11-26 07:38:10.049347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.049357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.049434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.049445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.049509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.049519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.049643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.049655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.049738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.269 [2024-11-26 07:38:10.049749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.269 qpair failed and we were unable to recover it. 00:28:42.269 [2024-11-26 07:38:10.049810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.270 [2024-11-26 07:38:10.049824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.270 qpair failed and we were unable to recover it. 00:28:42.270 [2024-11-26 07:38:10.049903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.270 [2024-11-26 07:38:10.049913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.270 qpair failed and we were unable to recover it. 00:28:42.270 [2024-11-26 07:38:10.049988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.270 [2024-11-26 07:38:10.050000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.270 qpair failed and we were unable to recover it. 00:28:42.270 [2024-11-26 07:38:10.050076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.270 [2024-11-26 07:38:10.050088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.270 qpair failed and we were unable to recover it. 00:28:42.270 [2024-11-26 07:38:10.050153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.270 [2024-11-26 07:38:10.050164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.270 qpair failed and we were unable to recover it. 
00:28:42.270 [2024-11-26 07:38:10.050291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.270 [2024-11-26 07:38:10.050302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:42.270 qpair failed and we were unable to recover it.
00:28:42.270-00:28:42.272 (the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every retry from 07:38:10.050362 through 07:38:10.059614, all against tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420)
00:28:42.272 [2024-11-26 07:38:10.059694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.273 [2024-11-26 07:38:10.059719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420
00:28:42.273 qpair failed and we were unable to recover it.
00:28:42.273 (the triplet then repeats from 07:38:10.059791 through 07:38:10.061628 against tqpair=0x7f76c0000b90 with addr=10.0.0.2, port=4420)
00:28:42.273 [2024-11-26 07:38:10.061696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.273 [2024-11-26 07:38:10.061712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:42.273 qpair failed and we were unable to recover it.
00:28:42.273-00:28:42.274 (and keeps repeating against tqpair=0x7f76c4000b90 from 07:38:10.061772 through 07:38:10.065816)
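errno = 111 is ECONNREFUSED: the host side of the test keeps calling connect() toward 10.0.0.2:4420 while the target's TCP listener is down, so every attempt is refused and the qpair cannot be established. A minimal sketch, not part of the test suite, that reproduces the same errno against any reachable host with nothing listening on the port (the address, port, and timeout below are illustrative assumptions):

    import errno
    import socket

    # Connect to a reachable address where no listener is bound to the port.
    # The address/port mirror the log but are assumptions for this sketch.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    try:
        sock.connect(("10.0.0.2", 4420))  # NVMe/TCP well-known port used by the test
    except OSError as exc:
        # When the peer is up but the port is closed, this prints errno 111.
        refused = " (ECONNREFUSED)" if exc.errno == errno.ECONNREFUSED else ""
        print(f"connect() failed, errno = {exc.errno}{refused}")
    finally:
        sock.close()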
00:28:42.274-00:28:42.275 (further identical failures against tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 from 07:38:10.065874 through 07:38:10.066515)
00:28:42.275 Malloc0
00:28:42.275 [2024-11-26 07:38:10.066581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.275 [2024-11-26 07:38:10.066593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:42.275 qpair failed and we were unable to recover it.
00:28:42.275 (identical failures against tqpair=0x7f76c4000b90 continue from 07:38:10.066662 through 07:38:10.067195)
00:28:42.275 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.275 [2024-11-26 07:38:10.067326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.275 [2024-11-26 07:38:10.067337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420
00:28:42.275 qpair failed and we were unable to recover it.
00:28:42.275 [2024-11-26 07:38:10.067401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.067412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.067554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.067566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:42.275 [2024-11-26 07:38:10.067699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.067711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.067786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.067797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.067853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.067863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.067930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.067943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.275 [2024-11-26 07:38:10.068010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 
00:28:42.275 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.275 [2024-11-26 07:38:10.068326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.275 [2024-11-26 07:38:10.068847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.275 qpair failed and we were unable to recover it. 00:28:42.275 [2024-11-26 07:38:10.068921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.068932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 
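Editor's note: the "rpc_cmd nvmf_create_transport -t tcp -o" step traced above creates the TCP transport on the target through SPDK's JSON-RPC interface (the later "nvmf_tcp_create: *** TCP Transport Init ***" notice is its effect). The hedged Python sketch below shows roughly what that RPC looks like on the wire; it assumes the default SPDK RPC Unix socket path /var/tmp/spdk.sock and that the whole response arrives in a single recv() call, which is a simplification of what the real rpc.py client does.

import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # assumption: default SPDK RPC Unix socket path

# JSON-RPC 2.0 request roughly equivalent to: rpc.py nvmf_create_transport -t tcp
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_create_transport",
    "params": {"trtype": "TCP"},
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(RPC_SOCK)
    sock.sendall(json.dumps(request).encode())
    # Simplification: assume the full JSON-RPC response fits in one chunk.
    print(sock.recv(65536).decode())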
00:28:42.276 [2024-11-26 07:38:10.068991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.069897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 
00:28:42.276 [2024-11-26 07:38:10.069979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.069990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.070832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 
00:28:42.276 [2024-11-26 07:38:10.070917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.070928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 
00:28:42.276 [2024-11-26 07:38:10.071880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.071892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.071994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.072006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.072070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.072081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.072192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.072203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.072260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.072271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.072339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.072351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.072445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.276 [2024-11-26 07:38:10.072457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.276 qpair failed and we were unable to recover it. 00:28:42.276 [2024-11-26 07:38:10.072598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.072609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.072683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.072695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.072766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.072778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 
00:28:42.277 [2024-11-26 07:38:10.072888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.072900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 
00:28:42.277 [2024-11-26 07:38:10.073875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.073980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.073992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.074063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.074152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.074276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.074291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.277 [2024-11-26 07:38:10.074380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.074475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.074571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.074665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 
00:28:42.277 [2024-11-26 07:38:10.074758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.074856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.074955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.075088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.075121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.075290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.075331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.075492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.075514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.075590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.075602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.075679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.075690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.075763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.075777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.075846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.075858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.075929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.075940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 
00:28:42.277 [2024-11-26 07:38:10.076022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.076034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.076102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.277 [2024-11-26 07:38:10.076113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.277 qpair failed and we were unable to recover it. 00:28:42.277 [2024-11-26 07:38:10.076173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 
00:28:42.278 [2024-11-26 07:38:10.076825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.076922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.076998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 
00:28:42.278 [2024-11-26 07:38:10.077633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.077958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.077968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.078059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.078071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.078132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.078143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.078196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.078206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.078266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.078276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.078335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.078346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 
00:28:42.278 [2024-11-26 07:38:10.078415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.078425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.278 qpair failed and we were unable to recover it. 00:28:42.278 [2024-11-26 07:38:10.078488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.278 [2024-11-26 07:38:10.078498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.078568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.078579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.078637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.078647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.078711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.078721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.078789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.078799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.078885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.078896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 
00:28:42.279 [2024-11-26 07:38:10.079235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.079982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.079993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 
00:28:42.279 [2024-11-26 07:38:10.080154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.080970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.080981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 
00:28:42.279 [2024-11-26 07:38:10.081032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.081042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.081099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.081111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.081171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.279 [2024-11-26 07:38:10.081181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.279 qpair failed and we were unable to recover it. 00:28:42.279 [2024-11-26 07:38:10.081264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.081275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.081335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.081345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.081403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.081413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.081485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.081497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.081553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.081563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.081703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.081713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.081841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.081852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 
00:28:42.280 [2024-11-26 07:38:10.081913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.081924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.081992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.082079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.082242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.082309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.082383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.082450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.082551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.082638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.082782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 
00:28:42.280 [2024-11-26 07:38:10.082860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.082871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.280 [2024-11-26 07:38:10.083099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.083173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.083245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.083331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.280 [2024-11-26 07:38:10.083420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.083508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.083586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 
00:28:42.280 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.280 [2024-11-26 07:38:10.083736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.083823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.083894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.083971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.083982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.280 [2024-11-26 07:38:10.084061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.084072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.084191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.084201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.084279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.084289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.280 qpair failed and we were unable to recover it. 00:28:42.280 [2024-11-26 07:38:10.084348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.280 [2024-11-26 07:38:10.084359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.084430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.084442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 
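The xtrace lines interleaved above show the target-side setup proceeding while the host keeps retrying: the rpc_cmd nvmf_create_subsystem step traced from host/target_disconnect.sh@22 creates the subsystem, and rpc_cmd in these test scripts is, in effect, a thin wrapper around SPDK's scripts/rpc.py. A minimal sketch of the same step issued directly (the working directory and default RPC socket are assumptions; the NQN and the -a/-s values are the ones visible in the trace):

# Create the subsystem, allow any host (-a), set the serial number (-s)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001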
00:28:42.281 [2024-11-26 07:38:10.084506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.084517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.084585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.084596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.084666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.084676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.084734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.084744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.084874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.084885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.084954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.084967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 
00:28:42.281 [2024-11-26 07:38:10.085487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.085985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.085996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 
00:28:42.281 [2024-11-26 07:38:10.086323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.086954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.086965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.281 [2024-11-26 07:38:10.087032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.087045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 
00:28:42.281 [2024-11-26 07:38:10.087100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.281 [2024-11-26 07:38:10.087110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.281 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.087177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.087188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.087322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.087333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.087397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.087409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.087471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.087481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.087555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.087566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.087622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.087633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.087773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.087784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.087911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.087922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.088062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 
00:28:42.282 [2024-11-26 07:38:10.088202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.088274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.088356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.088491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.088562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.088649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.088797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.088877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.088889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 
00:28:42.282 [2024-11-26 07:38:10.089218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.089901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.089911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.090039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.090051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.282 [2024-11-26 07:38:10.090119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.090129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 
00:28:42.282 [2024-11-26 07:38:10.090210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.282 [2024-11-26 07:38:10.090221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.282 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.090346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.090356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.090489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.090500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.090648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.090658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.090721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.090732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.090785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.090795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.090864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.090875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.283 [2024-11-26 07:38:10.090955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.090967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.091027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 
00:28:42.283 [2024-11-26 07:38:10.091101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.091189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.283 [2024-11-26 07:38:10.091331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.091410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.091492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.091576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.283 [2024-11-26 07:38:10.091654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.091724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.091817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 
00:28:42.283 [2024-11-26 07:38:10.091885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.283 [2024-11-26 07:38:10.091968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.091980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 
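Next, the rpc_cmd nvmf_subsystem_add_ns step traced from host/target_disconnect.sh@24 attaches the Malloc0 bdev to the subsystem as a namespace. A hedged equivalent outside the harness, assuming Malloc0 was created earlier in the run (the bdev_malloc_create sizes below are illustrative, not taken from this log):

# Only needed if the bdev does not exist yet: 64 MiB malloc bdev, 512-byte blocks, named Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Expose Malloc0 as a namespace of nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0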
00:28:42.283 [2024-11-26 07:38:10.092773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.283 [2024-11-26 07:38:10.092916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.283 qpair failed and we were unable to recover it. 00:28:42.283 [2024-11-26 07:38:10.092981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.092992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.093181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.093192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.093262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.093273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.093425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.093435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.093571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.093582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.093716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.093727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.093919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.093930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 
00:28:42.284 [2024-11-26 07:38:10.094089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.094908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.094918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 
00:28:42.284 [2024-11-26 07:38:10.095044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.284 [2024-11-26 07:38:10.095974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.095986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 
00:28:42.284 [2024-11-26 07:38:10.096058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.284 [2024-11-26 07:38:10.096068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.284 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.096144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.096155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.096227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.096237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.096374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.096385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.096450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.096460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.096588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.096599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.096792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.096805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.096865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.096876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.096956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.096967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.097039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 
00:28:42.285 [2024-11-26 07:38:10.097202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.097297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.097393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.097474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.097558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.097637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.097778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.097931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.097942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 
00:28:42.285 [2024-11-26 07:38:10.098180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.098898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.098961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.285 [2024-11-26 07:38:10.098971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.099051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.099061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 
00:28:42.285 [2024-11-26 07:38:10.099140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.099151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.099282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.099294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.285 [2024-11-26 07:38:10.099475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.099486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 [2024-11-26 07:38:10.099632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.285 [2024-11-26 07:38:10.099643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.285 qpair failed and we were unable to recover it. 00:28:42.285 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.285 [2024-11-26 07:38:10.099712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.099722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.099784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.099795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.099939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.099954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.286 [2024-11-26 07:38:10.100091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 
00:28:42.286 [2024-11-26 07:38:10.100233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.100332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.100413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.100515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.100593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.100677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.100765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.100849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.100860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.100993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.101004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.101141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.101152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 
00:28:42.286 [2024-11-26 07:38:10.101223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.101233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.101303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.101313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.101374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.101384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.101510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.101522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.101758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.101768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.101904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.101914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.102060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.102072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.102202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.102213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.102283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.286 [2024-11-26 07:38:10.102295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f76c4000b90 with addr=10.0.0.2, port=4420 00:28:42.286 qpair failed and we were unable to recover it. 
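The wall of `connect() failed, errno = 111` entries above is ECONNREFUSED: the host-side NVMe/TCP initiator keeps retrying 10.0.0.2:4420 while nothing is listening on that port yet; the `nvmf_subsystem_add_listener` calls only land a few entries further down. A minimal way to confirm that state from the initiator node, shown purely as an illustration (this probe is not part of the test script), using bash's built-in /dev/tcp redirection:

    # Probe the NVMe/TCP target port; a refusal here is the same ECONNREFUSED
    # (errno 111) that posix_sock_create reports in the log above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "port 4420 refused or unreachable"
    fi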
00:28:42.286 [2024-11-26 07:38:10.102422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.286 [2024-11-26 07:38:10.104900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.286 [2024-11-26 07:38:10.104990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.286 [2024-11-26 07:38:10.105009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.286 [2024-11-26 07:38:10.105019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.286 [2024-11-26 07:38:10.105026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.286 [2024-11-26 07:38:10.105046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.286 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:42.286 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.286 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.286 [2024-11-26 07:38:10.114898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.286 [2024-11-26 07:38:10.114988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.286 [2024-11-26 07:38:10.115004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.286 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.286 [2024-11-26 07:38:10.115011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.286 [2024-11-26 07:38:10.115023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.286 [2024-11-26 07:38:10.115040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.286 qpair failed and we were unable to recover it. 
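The `rpc_cmd nvmf_subsystem_add_listener ...` lines interleaved with the errors above come from host/target_disconnect.sh; in SPDK's test harness `rpc_cmd` forwards its arguments to scripts/rpc.py against the running target. A sketch of the equivalent standalone invocations, built only from the addresses and NQNs visible in the trace (the RPC socket is assumed to be the default):

    # Re-create the two TCP listeners the test requests: one on the data
    # subsystem, one on the discovery subsystem.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the failure mode changes: the socket connects, but the I/O qpair CONNECT is rejected with `sct 1, sc 130`. Status code type 0x1 is command-specific status, and for a Fabrics Connect command 0x82 appears to correspond to "Connect Invalid Parameters", which lines up with the target-side `Unknown controller ID 0x1` message.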
00:28:42.286 07:38:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 895804 00:28:42.286 [2024-11-26 07:38:10.124924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.286 [2024-11-26 07:38:10.125004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.286 [2024-11-26 07:38:10.125018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.286 [2024-11-26 07:38:10.125024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.286 [2024-11-26 07:38:10.125030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.286 [2024-11-26 07:38:10.125045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.286 qpair failed and we were unable to recover it. 00:28:42.286 [2024-11-26 07:38:10.134929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.286 [2024-11-26 07:38:10.135001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.286 [2024-11-26 07:38:10.135015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.135021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.135027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.135042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 00:28:42.287 [2024-11-26 07:38:10.144865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.144927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.144942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.144953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.144959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.144974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 
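From here to the end of the section the log repeats the same two-sided signature roughly every 10 ms: a target-side `Unknown controller ID 0x1`, then the host's CONNECT poll failing on tqpair 0x7f76c4000b90. When triaging a run like this it is usually faster to count the signatures than to read each block; a rough sketch (the console-log path is hypothetical):

    LOG=console.log   # hypothetical path to this build's saved console output
    grep -c 'Unknown controller ID 0x1' "$LOG"
    grep -c 'qpair failed and we were unable to recover it' "$LOG"
    # First and last occurrence, to bound how long the retry loop ran.
    grep -o '\[2024-11-26 [0-9:.]*\] ctrlr.c: 762' "$LOG" | sed -n '1p;$p'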
00:28:42.287 [2024-11-26 07:38:10.154889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.154982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.154996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.155002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.155008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.155022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 00:28:42.287 [2024-11-26 07:38:10.164916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.164979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.164994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.165000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.165006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.165021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 00:28:42.287 [2024-11-26 07:38:10.174940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.175007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.175021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.175028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.175033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.175048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 
00:28:42.287 [2024-11-26 07:38:10.184931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.185014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.185030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.185037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.185043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.185057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 00:28:42.287 [2024-11-26 07:38:10.195021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.195074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.195087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.195093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.195099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.195113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 00:28:42.287 [2024-11-26 07:38:10.205066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.205132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.205145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.205151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.205157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.205171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 
00:28:42.287 [2024-11-26 07:38:10.215010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.215094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.215107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.215113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.215119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.215134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 00:28:42.287 [2024-11-26 07:38:10.225042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.225097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.225110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.225117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.225125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.225140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 00:28:42.287 [2024-11-26 07:38:10.235052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.235109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.235123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.235129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.235135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.235150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 
00:28:42.287 [2024-11-26 07:38:10.245146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.287 [2024-11-26 07:38:10.245198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.287 [2024-11-26 07:38:10.245211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.287 [2024-11-26 07:38:10.245217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.287 [2024-11-26 07:38:10.245223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.287 [2024-11-26 07:38:10.245238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.287 qpair failed and we were unable to recover it. 00:28:42.287 [2024-11-26 07:38:10.255184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.288 [2024-11-26 07:38:10.255272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.288 [2024-11-26 07:38:10.255286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.288 [2024-11-26 07:38:10.255292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.288 [2024-11-26 07:38:10.255298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.288 [2024-11-26 07:38:10.255313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.288 qpair failed and we were unable to recover it. 00:28:42.288 [2024-11-26 07:38:10.265160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.288 [2024-11-26 07:38:10.265223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.288 [2024-11-26 07:38:10.265236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.288 [2024-11-26 07:38:10.265243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.288 [2024-11-26 07:38:10.265248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.288 [2024-11-26 07:38:10.265263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.288 qpair failed and we were unable to recover it. 
00:28:42.288 [2024-11-26 07:38:10.275234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.288 [2024-11-26 07:38:10.275311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.288 [2024-11-26 07:38:10.275325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.288 [2024-11-26 07:38:10.275331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.288 [2024-11-26 07:38:10.275337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.288 [2024-11-26 07:38:10.275352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.288 qpair failed and we were unable to recover it. 00:28:42.288 [2024-11-26 07:38:10.285255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.288 [2024-11-26 07:38:10.285310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.288 [2024-11-26 07:38:10.285323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.288 [2024-11-26 07:38:10.285330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.288 [2024-11-26 07:38:10.285336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.288 [2024-11-26 07:38:10.285350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.288 qpair failed and we were unable to recover it. 00:28:42.288 [2024-11-26 07:38:10.295332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.288 [2024-11-26 07:38:10.295409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.288 [2024-11-26 07:38:10.295423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.288 [2024-11-26 07:38:10.295429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.288 [2024-11-26 07:38:10.295435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.288 [2024-11-26 07:38:10.295450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.288 qpair failed and we were unable to recover it. 
00:28:42.288 [2024-11-26 07:38:10.305296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.288 [2024-11-26 07:38:10.305381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.288 [2024-11-26 07:38:10.305394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.288 [2024-11-26 07:38:10.305400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.288 [2024-11-26 07:38:10.305407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.288 [2024-11-26 07:38:10.305421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.288 qpair failed and we were unable to recover it. 00:28:42.288 [2024-11-26 07:38:10.315380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.288 [2024-11-26 07:38:10.315441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.288 [2024-11-26 07:38:10.315455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.288 [2024-11-26 07:38:10.315461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.288 [2024-11-26 07:38:10.315467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.288 [2024-11-26 07:38:10.315482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.288 qpair failed and we were unable to recover it. 00:28:42.288 [2024-11-26 07:38:10.325345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.288 [2024-11-26 07:38:10.325405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.288 [2024-11-26 07:38:10.325417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.288 [2024-11-26 07:38:10.325424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.288 [2024-11-26 07:38:10.325430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.288 [2024-11-26 07:38:10.325444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.288 qpair failed and we were unable to recover it. 
00:28:42.547 [2024-11-26 07:38:10.335412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.547 [2024-11-26 07:38:10.335488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.547 [2024-11-26 07:38:10.335510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.547 [2024-11-26 07:38:10.335517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.547 [2024-11-26 07:38:10.335523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.547 [2024-11-26 07:38:10.335543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.547 qpair failed and we were unable to recover it. 00:28:42.547 [2024-11-26 07:38:10.345368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.547 [2024-11-26 07:38:10.345425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.547 [2024-11-26 07:38:10.345439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.547 [2024-11-26 07:38:10.345446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.547 [2024-11-26 07:38:10.345452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.547 [2024-11-26 07:38:10.345467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.547 qpair failed and we were unable to recover it. 00:28:42.547 [2024-11-26 07:38:10.355470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.547 [2024-11-26 07:38:10.355538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.547 [2024-11-26 07:38:10.355552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.547 [2024-11-26 07:38:10.355562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.547 [2024-11-26 07:38:10.355568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.547 [2024-11-26 07:38:10.355583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.547 qpair failed and we were unable to recover it. 
00:28:42.547 [2024-11-26 07:38:10.365531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.547 [2024-11-26 07:38:10.365613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.547 [2024-11-26 07:38:10.365628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.547 [2024-11-26 07:38:10.365634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.547 [2024-11-26 07:38:10.365640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.547 [2024-11-26 07:38:10.365655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.547 qpair failed and we were unable to recover it. 00:28:42.547 [2024-11-26 07:38:10.375521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.547 [2024-11-26 07:38:10.375585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.547 [2024-11-26 07:38:10.375600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.547 [2024-11-26 07:38:10.375608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.547 [2024-11-26 07:38:10.375613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.547 [2024-11-26 07:38:10.375629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.547 qpair failed and we were unable to recover it. 00:28:42.547 [2024-11-26 07:38:10.385579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.547 [2024-11-26 07:38:10.385667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.547 [2024-11-26 07:38:10.385681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.547 [2024-11-26 07:38:10.385688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.547 [2024-11-26 07:38:10.385694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.547 [2024-11-26 07:38:10.385709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.547 qpair failed and we were unable to recover it. 
00:28:42.547 [2024-11-26 07:38:10.395578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.547 [2024-11-26 07:38:10.395645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.547 [2024-11-26 07:38:10.395659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.547 [2024-11-26 07:38:10.395666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.547 [2024-11-26 07:38:10.395672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.547 [2024-11-26 07:38:10.395691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.547 qpair failed and we were unable to recover it. 00:28:42.547 [2024-11-26 07:38:10.405610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.547 [2024-11-26 07:38:10.405663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.547 [2024-11-26 07:38:10.405678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.547 [2024-11-26 07:38:10.405685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.547 [2024-11-26 07:38:10.405691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.547 [2024-11-26 07:38:10.405706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.415694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.415749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.415762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.415769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.415775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.415789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 
00:28:42.548 [2024-11-26 07:38:10.425691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.425748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.425762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.425768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.425774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.425789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.435707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.435760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.435774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.435780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.435786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.435800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.445791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.445858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.445871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.445877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.445883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.445898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 
00:28:42.548 [2024-11-26 07:38:10.455780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.455840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.455853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.455860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.455866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.455880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.465805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.465911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.465924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.465931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.465937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.465957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.475868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.475925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.475939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.475946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.475956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.475971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 
00:28:42.548 [2024-11-26 07:38:10.485843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.485930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.485950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.485957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.485962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.485977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.495893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.495958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.495972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.495979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.495985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.495999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.505920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.505979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.505993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.506000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.506006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.506021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 
00:28:42.548 [2024-11-26 07:38:10.515926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.516001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.516015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.516022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.516027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.516044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.526047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.548 [2024-11-26 07:38:10.526133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.548 [2024-11-26 07:38:10.526146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.548 [2024-11-26 07:38:10.526152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.548 [2024-11-26 07:38:10.526158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.548 [2024-11-26 07:38:10.526176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.548 qpair failed and we were unable to recover it. 00:28:42.548 [2024-11-26 07:38:10.535988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.536072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.536085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.536091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.536097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.536112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 
00:28:42.549 [2024-11-26 07:38:10.546038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.546095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.546108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.546114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.546120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.546135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 00:28:42.549 [2024-11-26 07:38:10.556071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.556127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.556140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.556147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.556153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.556167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 00:28:42.549 [2024-11-26 07:38:10.566084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.566173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.566187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.566193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.566199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.566213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 
00:28:42.549 [2024-11-26 07:38:10.576123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.576181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.576194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.576201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.576207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.576221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 00:28:42.549 [2024-11-26 07:38:10.586196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.586263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.586277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.586283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.586289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.586304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 00:28:42.549 [2024-11-26 07:38:10.596214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.596309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.596322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.596328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.596334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.596348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 
00:28:42.549 [2024-11-26 07:38:10.606204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.606276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.606290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.606296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.606302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.606317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 00:28:42.549 [2024-11-26 07:38:10.616239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.616296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.616313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.616319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.616325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.616340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 00:28:42.549 [2024-11-26 07:38:10.626311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.626400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.626414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.626420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.626426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.626440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 
00:28:42.549 [2024-11-26 07:38:10.636300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.549 [2024-11-26 07:38:10.636357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.549 [2024-11-26 07:38:10.636370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.549 [2024-11-26 07:38:10.636377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.549 [2024-11-26 07:38:10.636383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.549 [2024-11-26 07:38:10.636397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.549 qpair failed and we were unable to recover it. 00:28:42.808 [2024-11-26 07:38:10.646373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.808 [2024-11-26 07:38:10.646452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.808 [2024-11-26 07:38:10.646465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.808 [2024-11-26 07:38:10.646471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.808 [2024-11-26 07:38:10.646477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.808 [2024-11-26 07:38:10.646491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.808 qpair failed and we were unable to recover it. 00:28:42.808 [2024-11-26 07:38:10.656355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.808 [2024-11-26 07:38:10.656413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.808 [2024-11-26 07:38:10.656426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.808 [2024-11-26 07:38:10.656433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.808 [2024-11-26 07:38:10.656441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.808 [2024-11-26 07:38:10.656456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.808 qpair failed and we were unable to recover it. 
00:28:42.808 [2024-11-26 07:38:10.666366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.808 [2024-11-26 07:38:10.666424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.808 [2024-11-26 07:38:10.666438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.808 [2024-11-26 07:38:10.666444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.808 [2024-11-26 07:38:10.666450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.808 [2024-11-26 07:38:10.666464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.808 qpair failed and we were unable to recover it. 00:28:42.808 [2024-11-26 07:38:10.676412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.808 [2024-11-26 07:38:10.676466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.808 [2024-11-26 07:38:10.676479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.808 [2024-11-26 07:38:10.676486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.808 [2024-11-26 07:38:10.676492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.808 [2024-11-26 07:38:10.676507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.808 qpair failed and we were unable to recover it. 00:28:42.808 [2024-11-26 07:38:10.686433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.808 [2024-11-26 07:38:10.686484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.808 [2024-11-26 07:38:10.686498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.808 [2024-11-26 07:38:10.686504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.808 [2024-11-26 07:38:10.686510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.808 [2024-11-26 07:38:10.686524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.808 qpair failed and we were unable to recover it. 
00:28:42.808 [2024-11-26 07:38:10.696522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.808 [2024-11-26 07:38:10.696579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.808 [2024-11-26 07:38:10.696593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.808 [2024-11-26 07:38:10.696599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.808 [2024-11-26 07:38:10.696605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.808 [2024-11-26 07:38:10.696619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.808 qpair failed and we were unable to recover it. 00:28:42.808 [2024-11-26 07:38:10.706560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.808 [2024-11-26 07:38:10.706616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.808 [2024-11-26 07:38:10.706630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.808 [2024-11-26 07:38:10.706636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.808 [2024-11-26 07:38:10.706643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.706657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 00:28:42.809 [2024-11-26 07:38:10.716523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.716574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.716587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.716594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.716599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.716614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 
00:28:42.809 [2024-11-26 07:38:10.726541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.726591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.726603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.726610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.726616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.726631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 00:28:42.809 [2024-11-26 07:38:10.736601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.736666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.736681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.736687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.736693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.736708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 00:28:42.809 [2024-11-26 07:38:10.746636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.746693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.746709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.746716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.746721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.746736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 
00:28:42.809 [2024-11-26 07:38:10.756646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.756701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.756715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.756721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.756727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.756741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 00:28:42.809 [2024-11-26 07:38:10.766703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.766754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.766769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.766775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.766781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.766796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 00:28:42.809 [2024-11-26 07:38:10.776739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.776818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.776832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.776838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.776844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.776858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 
00:28:42.809 [2024-11-26 07:38:10.786750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.786813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.786827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.786838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.786845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.786860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 00:28:42.809 [2024-11-26 07:38:10.796771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.796824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.796838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.796844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.796850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.796864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 00:28:42.809 [2024-11-26 07:38:10.806801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.806853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.806866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.806872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.806878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.806893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 
00:28:42.809 [2024-11-26 07:38:10.816826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.809 [2024-11-26 07:38:10.816884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.809 [2024-11-26 07:38:10.816897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.809 [2024-11-26 07:38:10.816903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.809 [2024-11-26 07:38:10.816909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.809 [2024-11-26 07:38:10.816923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.809 qpair failed and we were unable to recover it. 00:28:42.809 [2024-11-26 07:38:10.826856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.810 [2024-11-26 07:38:10.826913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.810 [2024-11-26 07:38:10.826926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.810 [2024-11-26 07:38:10.826932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.810 [2024-11-26 07:38:10.826938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.810 [2024-11-26 07:38:10.826956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.810 qpair failed and we were unable to recover it. 00:28:42.810 [2024-11-26 07:38:10.836864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.810 [2024-11-26 07:38:10.836917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.810 [2024-11-26 07:38:10.836930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.810 [2024-11-26 07:38:10.836936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.810 [2024-11-26 07:38:10.836943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.810 [2024-11-26 07:38:10.836962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.810 qpair failed and we were unable to recover it. 
00:28:42.810 [2024-11-26 07:38:10.846938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.810 [2024-11-26 07:38:10.846991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.810 [2024-11-26 07:38:10.847004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.810 [2024-11-26 07:38:10.847010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.810 [2024-11-26 07:38:10.847016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.810 [2024-11-26 07:38:10.847031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.810 qpair failed and we were unable to recover it. 00:28:42.810 [2024-11-26 07:38:10.856934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.810 [2024-11-26 07:38:10.857008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.810 [2024-11-26 07:38:10.857022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.810 [2024-11-26 07:38:10.857029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.810 [2024-11-26 07:38:10.857035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.810 [2024-11-26 07:38:10.857049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.810 qpair failed and we were unable to recover it. 00:28:42.810 [2024-11-26 07:38:10.866957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.810 [2024-11-26 07:38:10.867017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.810 [2024-11-26 07:38:10.867031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.810 [2024-11-26 07:38:10.867037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.810 [2024-11-26 07:38:10.867043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.810 [2024-11-26 07:38:10.867058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.810 qpair failed and we were unable to recover it. 
00:28:42.810 [2024-11-26 07:38:10.876988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.810 [2024-11-26 07:38:10.877052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.810 [2024-11-26 07:38:10.877067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.810 [2024-11-26 07:38:10.877074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.810 [2024-11-26 07:38:10.877079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.810 [2024-11-26 07:38:10.877095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.810 qpair failed and we were unable to recover it. 00:28:42.810 [2024-11-26 07:38:10.887015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.810 [2024-11-26 07:38:10.887071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.810 [2024-11-26 07:38:10.887085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.810 [2024-11-26 07:38:10.887092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.810 [2024-11-26 07:38:10.887098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.810 [2024-11-26 07:38:10.887113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.810 qpair failed and we were unable to recover it. 00:28:42.810 [2024-11-26 07:38:10.897032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.810 [2024-11-26 07:38:10.897091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.810 [2024-11-26 07:38:10.897104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.810 [2024-11-26 07:38:10.897110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.810 [2024-11-26 07:38:10.897116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:42.810 [2024-11-26 07:38:10.897130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.810 qpair failed and we were unable to recover it. 
00:28:43.070 [2024-11-26 07:38:10.907084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.907144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.907157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.907163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.907169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.907183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 00:28:43.070 [2024-11-26 07:38:10.917240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.917350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.917364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.917376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.917382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.917396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 00:28:43.070 [2024-11-26 07:38:10.927171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.927228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.927241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.927247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.927253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.927268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 
00:28:43.070 [2024-11-26 07:38:10.937200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.937259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.937272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.937279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.937285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.937299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 00:28:43.070 [2024-11-26 07:38:10.947240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.947300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.947313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.947320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.947325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.947340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 00:28:43.070 [2024-11-26 07:38:10.957211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.957266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.957280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.957286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.957292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.957310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 
00:28:43.070 [2024-11-26 07:38:10.967235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.967289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.967302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.967308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.967315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.967329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 00:28:43.070 [2024-11-26 07:38:10.977279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.977339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.977352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.977358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.977364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.977378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 00:28:43.070 [2024-11-26 07:38:10.987308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.987365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.987378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.987384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.987390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.987405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 
00:28:43.070 [2024-11-26 07:38:10.997337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:10.997417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:10.997430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:10.997436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:10.997443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:10.997457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 00:28:43.070 [2024-11-26 07:38:11.007390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:11.007447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:11.007461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:11.007467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:11.007473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:11.007488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.070 qpair failed and we were unable to recover it. 00:28:43.070 [2024-11-26 07:38:11.017406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.070 [2024-11-26 07:38:11.017465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.070 [2024-11-26 07:38:11.017478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.070 [2024-11-26 07:38:11.017485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.070 [2024-11-26 07:38:11.017491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.070 [2024-11-26 07:38:11.017506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 
00:28:43.071 [2024-11-26 07:38:11.027424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.027482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.027495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.027501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.027507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.027522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.071 [2024-11-26 07:38:11.037457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.037511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.037524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.037530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.037536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.037550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.071 [2024-11-26 07:38:11.047486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.047541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.047557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.047564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.047570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.047584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 
00:28:43.071 [2024-11-26 07:38:11.057540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.057599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.057613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.057619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.057625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.057640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.071 [2024-11-26 07:38:11.067538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.067599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.067612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.067619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.067625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.067639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.071 [2024-11-26 07:38:11.077568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.077627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.077640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.077647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.077653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.077668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 
00:28:43.071 [2024-11-26 07:38:11.087536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.087591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.087603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.087610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.087615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.087633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.071 [2024-11-26 07:38:11.097636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.097697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.097710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.097716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.097722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.097737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.071 [2024-11-26 07:38:11.107666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.107724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.107738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.107745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.107751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.107766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 
00:28:43.071 [2024-11-26 07:38:11.117715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.117817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.117831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.117838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.117844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.117858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.071 [2024-11-26 07:38:11.127722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.127780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.127795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.127802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.127808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.127822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.071 [2024-11-26 07:38:11.137757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.137816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.137830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.137836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.137842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.137856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 
00:28:43.071 [2024-11-26 07:38:11.147825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.071 [2024-11-26 07:38:11.147885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.071 [2024-11-26 07:38:11.147899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.071 [2024-11-26 07:38:11.147906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.071 [2024-11-26 07:38:11.147912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.071 [2024-11-26 07:38:11.147926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.071 qpair failed and we were unable to recover it. 00:28:43.072 [2024-11-26 07:38:11.157802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.072 [2024-11-26 07:38:11.157859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.072 [2024-11-26 07:38:11.157873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.072 [2024-11-26 07:38:11.157879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.072 [2024-11-26 07:38:11.157885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.072 [2024-11-26 07:38:11.157900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.072 qpair failed and we were unable to recover it. 00:28:43.331 [2024-11-26 07:38:11.167864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.331 [2024-11-26 07:38:11.167922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.331 [2024-11-26 07:38:11.167936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.331 [2024-11-26 07:38:11.167942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.331 [2024-11-26 07:38:11.167952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.331 [2024-11-26 07:38:11.167967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.331 qpair failed and we were unable to recover it. 
00:28:43.331 [2024-11-26 07:38:11.177859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.331 [2024-11-26 07:38:11.177914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.331 [2024-11-26 07:38:11.177931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.331 [2024-11-26 07:38:11.177937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.331 [2024-11-26 07:38:11.177943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.331 [2024-11-26 07:38:11.177961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.331 qpair failed and we were unable to recover it. 00:28:43.331 [2024-11-26 07:38:11.187900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.331 [2024-11-26 07:38:11.187959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.331 [2024-11-26 07:38:11.187973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.331 [2024-11-26 07:38:11.187979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.331 [2024-11-26 07:38:11.187985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.331 [2024-11-26 07:38:11.188000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.331 qpair failed and we were unable to recover it. 00:28:43.331 [2024-11-26 07:38:11.197921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.331 [2024-11-26 07:38:11.198011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.331 [2024-11-26 07:38:11.198025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.331 [2024-11-26 07:38:11.198031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.331 [2024-11-26 07:38:11.198037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.331 [2024-11-26 07:38:11.198052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.331 qpair failed and we were unable to recover it. 
00:28:43.331 [2024-11-26 07:38:11.207939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.331 [2024-11-26 07:38:11.207996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.331 [2024-11-26 07:38:11.208010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.331 [2024-11-26 07:38:11.208017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.331 [2024-11-26 07:38:11.208023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.331 [2024-11-26 07:38:11.208037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.331 qpair failed and we were unable to recover it. 00:28:43.331 [2024-11-26 07:38:11.217985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.331 [2024-11-26 07:38:11.218043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.331 [2024-11-26 07:38:11.218057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.331 [2024-11-26 07:38:11.218064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.331 [2024-11-26 07:38:11.218073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.331 [2024-11-26 07:38:11.218088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.331 qpair failed and we were unable to recover it. 00:28:43.331 [2024-11-26 07:38:11.228021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.331 [2024-11-26 07:38:11.228073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.331 [2024-11-26 07:38:11.228086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.331 [2024-11-26 07:38:11.228093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.331 [2024-11-26 07:38:11.228099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.228114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 
00:28:43.332 [2024-11-26 07:38:11.238030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.238101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.238114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.238120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.238126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.238141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 00:28:43.332 [2024-11-26 07:38:11.248059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.248113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.248128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.248134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.248140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.248155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 00:28:43.332 [2024-11-26 07:38:11.258104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.258184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.258197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.258204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.258210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.258225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 
00:28:43.332 [2024-11-26 07:38:11.268139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.268198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.268212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.268218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.268224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.268239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 00:28:43.332 [2024-11-26 07:38:11.278145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.278202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.278216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.278223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.278230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.278244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 00:28:43.332 [2024-11-26 07:38:11.288176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.288250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.288263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.288270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.288276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.288290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 
00:28:43.332 [2024-11-26 07:38:11.298222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.298296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.298309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.298315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.298321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.298335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 00:28:43.332 [2024-11-26 07:38:11.308176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.308231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.308248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.308255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.308261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.308276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 00:28:43.332 [2024-11-26 07:38:11.318271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.318325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.318338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.318344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.318350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.318365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 
00:28:43.332 [2024-11-26 07:38:11.328301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.328370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.328384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.328390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.328396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.328411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 00:28:43.332 [2024-11-26 07:38:11.338247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.338302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.338315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.338322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.338327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.338342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 00:28:43.332 [2024-11-26 07:38:11.348319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.348401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.348415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.348425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.348431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.348446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.332 qpair failed and we were unable to recover it. 
00:28:43.332 [2024-11-26 07:38:11.358319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.332 [2024-11-26 07:38:11.358372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.332 [2024-11-26 07:38:11.358385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.332 [2024-11-26 07:38:11.358391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.332 [2024-11-26 07:38:11.358397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.332 [2024-11-26 07:38:11.358412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.333 qpair failed and we were unable to recover it. 00:28:43.333 [2024-11-26 07:38:11.368405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.333 [2024-11-26 07:38:11.368475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.333 [2024-11-26 07:38:11.368489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.333 [2024-11-26 07:38:11.368496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.333 [2024-11-26 07:38:11.368501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.333 [2024-11-26 07:38:11.368516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.333 qpair failed and we were unable to recover it. 00:28:43.333 [2024-11-26 07:38:11.378371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.333 [2024-11-26 07:38:11.378429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.333 [2024-11-26 07:38:11.378443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.333 [2024-11-26 07:38:11.378449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.333 [2024-11-26 07:38:11.378455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.333 [2024-11-26 07:38:11.378470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.333 qpair failed and we were unable to recover it. 
00:28:43.333 [2024-11-26 07:38:11.388453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.333 [2024-11-26 07:38:11.388512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.333 [2024-11-26 07:38:11.388526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.333 [2024-11-26 07:38:11.388532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.333 [2024-11-26 07:38:11.388538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.333 [2024-11-26 07:38:11.388553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.333 qpair failed and we were unable to recover it. 00:28:43.333 [2024-11-26 07:38:11.398463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.333 [2024-11-26 07:38:11.398518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.333 [2024-11-26 07:38:11.398532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.333 [2024-11-26 07:38:11.398539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.333 [2024-11-26 07:38:11.398545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.333 [2024-11-26 07:38:11.398559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.333 qpair failed and we were unable to recover it. 00:28:43.333 [2024-11-26 07:38:11.408539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.333 [2024-11-26 07:38:11.408592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.333 [2024-11-26 07:38:11.408605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.333 [2024-11-26 07:38:11.408611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.333 [2024-11-26 07:38:11.408617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.333 [2024-11-26 07:38:11.408632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.333 qpair failed and we were unable to recover it. 
00:28:43.333 [2024-11-26 07:38:11.418532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.333 [2024-11-26 07:38:11.418591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.333 [2024-11-26 07:38:11.418604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.333 [2024-11-26 07:38:11.418611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.333 [2024-11-26 07:38:11.418617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.333 [2024-11-26 07:38:11.418631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.333 qpair failed and we were unable to recover it. 00:28:43.593 [2024-11-26 07:38:11.428600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.428689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.428702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.428709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.593 [2024-11-26 07:38:11.428714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.593 [2024-11-26 07:38:11.428729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.593 qpair failed and we were unable to recover it. 00:28:43.593 [2024-11-26 07:38:11.438532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.438592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.438605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.438612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.593 [2024-11-26 07:38:11.438618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.593 [2024-11-26 07:38:11.438632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.593 qpair failed and we were unable to recover it. 
00:28:43.593 [2024-11-26 07:38:11.448547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.448596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.448609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.448616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.593 [2024-11-26 07:38:11.448621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.593 [2024-11-26 07:38:11.448635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.593 qpair failed and we were unable to recover it. 00:28:43.593 [2024-11-26 07:38:11.458604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.458663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.458677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.458684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.593 [2024-11-26 07:38:11.458689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.593 [2024-11-26 07:38:11.458704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.593 qpair failed and we were unable to recover it. 00:28:43.593 [2024-11-26 07:38:11.468624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.468677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.468690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.468697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.593 [2024-11-26 07:38:11.468702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.593 [2024-11-26 07:38:11.468717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.593 qpair failed and we were unable to recover it. 
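Since every rejection names controller ID 0x1 on nqn.2016-06.io.spdk:cnode1, one way to narrow this down is to ask the running nvmf target what it currently knows about that subsystem. The sketch below is a hedged manual check, assuming the scripts/rpc.py tool from the SPDK tree checked out by this job and the default /var/tmp/spdk.sock RPC socket; nvmf_get_subsystems and nvmf_subsystem_get_controllers are the RPCs it relies on (available in recent SPDK releases).
# Hedged target-side check (assumes the workspace layout used by this job and the default RPC socket).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Confirm the subsystem and its TCP listener on 10.0.0.2:4420 are present.
./scripts/rpc.py nvmf_get_subsystems
# List the controllers the target currently tracks for the subsystem; if no controller with
# cntlid 1 exists, an I/O qpair CONNECT carrying that ID is rejected exactly as logged above.
./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1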
00:28:43.593 [2024-11-26 07:38:11.478631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.478688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.478702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.478712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.593 [2024-11-26 07:38:11.478718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.593 [2024-11-26 07:38:11.478733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.593 qpair failed and we were unable to recover it. 00:28:43.593 [2024-11-26 07:38:11.488761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.488817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.488830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.488837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.593 [2024-11-26 07:38:11.488843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.593 [2024-11-26 07:38:11.488858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.593 qpair failed and we were unable to recover it. 00:28:43.593 [2024-11-26 07:38:11.498709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.498767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.498780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.498786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.593 [2024-11-26 07:38:11.498792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.593 [2024-11-26 07:38:11.498807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.593 qpair failed and we were unable to recover it. 
00:28:43.593 [2024-11-26 07:38:11.508780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.593 [2024-11-26 07:38:11.508839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.593 [2024-11-26 07:38:11.508852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.593 [2024-11-26 07:38:11.508858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.508864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.508879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.518751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.518809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.518823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.518830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.518836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.518853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.528840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.528891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.528905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.528911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.528917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.528931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 
00:28:43.594 [2024-11-26 07:38:11.538888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.538961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.538975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.538981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.538987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.539002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.548838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.548897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.548911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.548917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.548923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.548938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.558864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.558918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.558931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.558938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.558944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.558963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 
00:28:43.594 [2024-11-26 07:38:11.568972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.569054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.569068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.569075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.569081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.569096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.578998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.579056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.579070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.579076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.579082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.579096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.589029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.589086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.589100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.589106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.589112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.589127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 
00:28:43.594 [2024-11-26 07:38:11.599051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.599152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.599165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.599171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.599177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.599191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.609122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.609207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.609223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.609230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.609235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.609249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.619039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.619095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.619108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.619114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.619120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.619134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 
00:28:43.594 [2024-11-26 07:38:11.629080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.629134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.629148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.629155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.629161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.594 [2024-11-26 07:38:11.629175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.594 qpair failed and we were unable to recover it. 00:28:43.594 [2024-11-26 07:38:11.639175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.594 [2024-11-26 07:38:11.639229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.594 [2024-11-26 07:38:11.639242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.594 [2024-11-26 07:38:11.639248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.594 [2024-11-26 07:38:11.639253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.595 [2024-11-26 07:38:11.639268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.595 qpair failed and we were unable to recover it. 00:28:43.595 [2024-11-26 07:38:11.649187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.595 [2024-11-26 07:38:11.649242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.595 [2024-11-26 07:38:11.649255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.595 [2024-11-26 07:38:11.649261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.595 [2024-11-26 07:38:11.649272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.595 [2024-11-26 07:38:11.649287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.595 qpair failed and we were unable to recover it. 
00:28:43.595 [2024-11-26 07:38:11.659235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.595 [2024-11-26 07:38:11.659294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.595 [2024-11-26 07:38:11.659308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.595 [2024-11-26 07:38:11.659314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.595 [2024-11-26 07:38:11.659320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.595 [2024-11-26 07:38:11.659334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.595 qpair failed and we were unable to recover it. 00:28:43.595 [2024-11-26 07:38:11.669187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.595 [2024-11-26 07:38:11.669245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.595 [2024-11-26 07:38:11.669258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.595 [2024-11-26 07:38:11.669265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.595 [2024-11-26 07:38:11.669271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.595 [2024-11-26 07:38:11.669285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.595 qpair failed and we were unable to recover it. 00:28:43.595 [2024-11-26 07:38:11.679222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.595 [2024-11-26 07:38:11.679273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.595 [2024-11-26 07:38:11.679287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.595 [2024-11-26 07:38:11.679293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.595 [2024-11-26 07:38:11.679300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.595 [2024-11-26 07:38:11.679315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.595 qpair failed and we were unable to recover it. 
00:28:43.854 [2024-11-26 07:38:11.689250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.854 [2024-11-26 07:38:11.689338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.854 [2024-11-26 07:38:11.689351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.854 [2024-11-26 07:38:11.689357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.854 [2024-11-26 07:38:11.689363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.854 [2024-11-26 07:38:11.689378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.854 qpair failed and we were unable to recover it. 00:28:43.854 [2024-11-26 07:38:11.699329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.854 [2024-11-26 07:38:11.699389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.854 [2024-11-26 07:38:11.699403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.854 [2024-11-26 07:38:11.699410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.854 [2024-11-26 07:38:11.699416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.854 [2024-11-26 07:38:11.699431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.854 qpair failed and we were unable to recover it. 00:28:43.854 [2024-11-26 07:38:11.709354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.854 [2024-11-26 07:38:11.709413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.854 [2024-11-26 07:38:11.709427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.854 [2024-11-26 07:38:11.709433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.709439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.709454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 
00:28:43.855 [2024-11-26 07:38:11.719322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.719377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.719392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.719398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.719404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.719418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 00:28:43.855 [2024-11-26 07:38:11.729433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.729487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.729500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.729507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.729513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.729527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 00:28:43.855 [2024-11-26 07:38:11.739388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.739442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.739460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.739467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.739473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.739487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 
00:28:43.855 [2024-11-26 07:38:11.749512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.749568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.749581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.749587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.749593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.749607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 00:28:43.855 [2024-11-26 07:38:11.759531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.759587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.759600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.759607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.759613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.759627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 00:28:43.855 [2024-11-26 07:38:11.769533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.769607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.769620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.769626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.769632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.769646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 
00:28:43.855 [2024-11-26 07:38:11.779609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.779667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.779680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.779686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.779695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.779710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 00:28:43.855 [2024-11-26 07:38:11.789603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.789663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.789678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.789684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.789690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.789704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 00:28:43.855 [2024-11-26 07:38:11.799671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.799728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.799741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.799747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.799753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.799768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 
00:28:43.855 [2024-11-26 07:38:11.809662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.809745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.809758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.809765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.809770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.809785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 00:28:43.855 [2024-11-26 07:38:11.819691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.819748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.819761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.819768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.819774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.819788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 00:28:43.855 [2024-11-26 07:38:11.829722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.829775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.829789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.829795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.829801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.855 [2024-11-26 07:38:11.829816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.855 qpair failed and we were unable to recover it. 
00:28:43.855 [2024-11-26 07:38:11.839784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.855 [2024-11-26 07:38:11.839846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.855 [2024-11-26 07:38:11.839860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.855 [2024-11-26 07:38:11.839866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.855 [2024-11-26 07:38:11.839872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.839887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 00:28:43.856 [2024-11-26 07:38:11.849767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.849824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.849838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.849844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.849850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.849865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 00:28:43.856 [2024-11-26 07:38:11.859815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.859874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.859888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.859894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.859900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.859915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 
00:28:43.856 [2024-11-26 07:38:11.869827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.869880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.869897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.869903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.869909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.869924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 00:28:43.856 [2024-11-26 07:38:11.879853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.879908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.879922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.879929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.879935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.879954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 00:28:43.856 [2024-11-26 07:38:11.889884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.889932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.889945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.889955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.889961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.889975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 
00:28:43.856 [2024-11-26 07:38:11.899904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.899963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.899977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.899983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.899989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.900003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 00:28:43.856 [2024-11-26 07:38:11.909950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.910007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.910021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.910030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.910036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.910051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 00:28:43.856 [2024-11-26 07:38:11.919970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.920027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.920041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.920047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.920053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.920068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 
00:28:43.856 [2024-11-26 07:38:11.930060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.930114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.930128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.930134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.930141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.930155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 00:28:43.856 [2024-11-26 07:38:11.940045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.856 [2024-11-26 07:38:11.940105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.856 [2024-11-26 07:38:11.940119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.856 [2024-11-26 07:38:11.940126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.856 [2024-11-26 07:38:11.940132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:43.856 [2024-11-26 07:38:11.940147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.856 qpair failed and we were unable to recover it. 00:28:44.116 [2024-11-26 07:38:11.950100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.116 [2024-11-26 07:38:11.950156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.116 [2024-11-26 07:38:11.950169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.116 [2024-11-26 07:38:11.950175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.116 [2024-11-26 07:38:11.950181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.116 [2024-11-26 07:38:11.950196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.116 qpair failed and we were unable to recover it. 
00:28:44.116 [2024-11-26 07:38:11.960093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.116 [2024-11-26 07:38:11.960146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.116 [2024-11-26 07:38:11.960160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.116 [2024-11-26 07:38:11.960166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.116 [2024-11-26 07:38:11.960172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.116 [2024-11-26 07:38:11.960187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.116 qpair failed and we were unable to recover it. 00:28:44.116 [2024-11-26 07:38:11.970113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.116 [2024-11-26 07:38:11.970169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.116 [2024-11-26 07:38:11.970182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.116 [2024-11-26 07:38:11.970189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.116 [2024-11-26 07:38:11.970195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.116 [2024-11-26 07:38:11.970210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.116 qpair failed and we were unable to recover it. 00:28:44.116 [2024-11-26 07:38:11.980158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.116 [2024-11-26 07:38:11.980239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.116 [2024-11-26 07:38:11.980253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.116 [2024-11-26 07:38:11.980259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.116 [2024-11-26 07:38:11.980265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.116 [2024-11-26 07:38:11.980280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.116 qpair failed and we were unable to recover it. 
00:28:44.117 [2024-11-26 07:38:11.990181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:11.990236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:11.990250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:11.990256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:11.990262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:11.990277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.000209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.000270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.000284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.000290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.000296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.000311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.010225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.010285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.010299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.010305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.010311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.010326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 
00:28:44.117 [2024-11-26 07:38:12.020294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.020380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.020393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.020400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.020406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.020421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.030234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.030292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.030305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.030312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.030318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.030332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.040331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.040386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.040399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.040409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.040415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.040430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 
00:28:44.117 [2024-11-26 07:38:12.050384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.050437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.050451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.050457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.050463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.050478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.060400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.060456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.060469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.060476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.060482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.060496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.070454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.070512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.070526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.070532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.070538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.070553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 
00:28:44.117 [2024-11-26 07:38:12.080440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.080498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.080511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.080518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.080524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.080541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.090471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.090526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.090540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.090546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.090552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.090567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.100498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.100555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.100569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.100575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.100581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.100596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 
00:28:44.117 [2024-11-26 07:38:12.110525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.117 [2024-11-26 07:38:12.110657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.117 [2024-11-26 07:38:12.110672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.117 [2024-11-26 07:38:12.110679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.117 [2024-11-26 07:38:12.110685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.117 [2024-11-26 07:38:12.110700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.117 qpair failed and we were unable to recover it. 00:28:44.117 [2024-11-26 07:38:12.120554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.120607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.120620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.120627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.120632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:44.118 [2024-11-26 07:38:12.120648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.118 qpair failed and we were unable to recover it. 00:28:44.118 [2024-11-26 07:38:12.130564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.130634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.130662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.130674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.130683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.118 [2024-11-26 07:38:12.130708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.118 qpair failed and we were unable to recover it. 
00:28:44.118 [2024-11-26 07:38:12.140612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.140671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.140687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.140693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.140700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.118 [2024-11-26 07:38:12.140715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.118 qpair failed and we were unable to recover it. 00:28:44.118 [2024-11-26 07:38:12.150640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.150706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.150722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.150728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.150734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.118 [2024-11-26 07:38:12.150749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.118 qpair failed and we were unable to recover it. 00:28:44.118 [2024-11-26 07:38:12.160668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.160727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.160742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.160749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.160754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.118 [2024-11-26 07:38:12.160769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.118 qpair failed and we were unable to recover it. 
00:28:44.118 [2024-11-26 07:38:12.170726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.170778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.170797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.170804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.170810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.118 [2024-11-26 07:38:12.170825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.118 qpair failed and we were unable to recover it. 00:28:44.118 [2024-11-26 07:38:12.180711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.180770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.180785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.180792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.180798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.118 [2024-11-26 07:38:12.180812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.118 qpair failed and we were unable to recover it. 00:28:44.118 [2024-11-26 07:38:12.190732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.190790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.190804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.190811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.190817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.118 [2024-11-26 07:38:12.190832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.118 qpair failed and we were unable to recover it. 
00:28:44.118 [2024-11-26 07:38:12.200779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.118 [2024-11-26 07:38:12.200829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.118 [2024-11-26 07:38:12.200844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.118 [2024-11-26 07:38:12.200851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.118 [2024-11-26 07:38:12.200857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.118 [2024-11-26 07:38:12.200871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.118 qpair failed and we were unable to recover it. 00:28:44.378 [2024-11-26 07:38:12.210828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.210903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.210919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.210925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.210931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.210953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 00:28:44.378 [2024-11-26 07:38:12.220839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.220906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.220920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.220927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.220933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.220952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 
00:28:44.378 [2024-11-26 07:38:12.230925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.230986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.231001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.231008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.231014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.231029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 00:28:44.378 [2024-11-26 07:38:12.240902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.240959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.240973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.240980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.240986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.241000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 00:28:44.378 [2024-11-26 07:38:12.250920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.250982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.250997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.251004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.251010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.251024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 
00:28:44.378 [2024-11-26 07:38:12.260954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.261013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.261027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.261033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.261039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.261053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 00:28:44.378 [2024-11-26 07:38:12.270979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.271039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.271053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.271059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.271065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.271080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 00:28:44.378 [2024-11-26 07:38:12.281012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.281070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.281084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.281090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.281096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.281111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 
00:28:44.378 [2024-11-26 07:38:12.291042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.291113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.291127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.291133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.291139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.291153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 00:28:44.378 [2024-11-26 07:38:12.301075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.301133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.301150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.301157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.301163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.301177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 00:28:44.378 [2024-11-26 07:38:12.311120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.378 [2024-11-26 07:38:12.311187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.378 [2024-11-26 07:38:12.311201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.378 [2024-11-26 07:38:12.311208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.378 [2024-11-26 07:38:12.311214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.378 [2024-11-26 07:38:12.311228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.378 qpair failed and we were unable to recover it. 
00:28:44.379 [2024-11-26 07:38:12.321179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.321238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.321253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.321259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.321265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.379 [2024-11-26 07:38:12.321280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.331097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.331147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.331161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.331167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.331173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.379 [2024-11-26 07:38:12.331187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.341197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.341255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.341269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.341276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.341282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.379 [2024-11-26 07:38:12.341299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.379 qpair failed and we were unable to recover it. 
00:28:44.379 [2024-11-26 07:38:12.351258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.351330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.351344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.351350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.351356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.379 [2024-11-26 07:38:12.351371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.361258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.361313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.361326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.361333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.361339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.379 [2024-11-26 07:38:12.361353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.371283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.371339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.371353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.371360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.371366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.379 [2024-11-26 07:38:12.371380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.379 qpair failed and we were unable to recover it. 
00:28:44.379 [2024-11-26 07:38:12.381356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.381464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.381479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.381485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.381491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:44.379 [2024-11-26 07:38:12.381506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.391341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.391412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.391439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.391451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.391460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.379 [2024-11-26 07:38:12.391485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.401365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.401425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.401440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.401448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.401453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.379 [2024-11-26 07:38:12.401469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.379 qpair failed and we were unable to recover it. 
00:28:44.379 [2024-11-26 07:38:12.411349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.411399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.411414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.411421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.411427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.379 [2024-11-26 07:38:12.411443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.421423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.421483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.421498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.421504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.421510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.379 [2024-11-26 07:38:12.421525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.431441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.431496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.431514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.431520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.431527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.379 [2024-11-26 07:38:12.431541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.379 qpair failed and we were unable to recover it. 
00:28:44.379 [2024-11-26 07:38:12.441472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.379 [2024-11-26 07:38:12.441528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.379 [2024-11-26 07:38:12.441542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.379 [2024-11-26 07:38:12.441549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.379 [2024-11-26 07:38:12.441555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.379 [2024-11-26 07:38:12.441570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.379 qpair failed and we were unable to recover it. 00:28:44.379 [2024-11-26 07:38:12.451522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.380 [2024-11-26 07:38:12.451572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.380 [2024-11-26 07:38:12.451586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.380 [2024-11-26 07:38:12.451593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.380 [2024-11-26 07:38:12.451599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.380 [2024-11-26 07:38:12.451614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.380 qpair failed and we were unable to recover it. 00:28:44.380 [2024-11-26 07:38:12.461557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.380 [2024-11-26 07:38:12.461614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.380 [2024-11-26 07:38:12.461628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.380 [2024-11-26 07:38:12.461634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.380 [2024-11-26 07:38:12.461640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.380 [2024-11-26 07:38:12.461654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.380 qpair failed and we were unable to recover it. 
00:28:44.640 [2024-11-26 07:38:12.471555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.471629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.471644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.471651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.471666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.471681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.641 [2024-11-26 07:38:12.481577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.481633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.481647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.481654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.481660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.481675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.641 [2024-11-26 07:38:12.491615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.491674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.491689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.491697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.491704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.491720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 
00:28:44.641 [2024-11-26 07:38:12.501587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.501671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.501685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.501691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.501698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.501712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.641 [2024-11-26 07:38:12.511697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.511801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.511815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.511822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.511828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.511842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.641 [2024-11-26 07:38:12.521751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.521830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.521844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.521850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.521856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.521871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 
00:28:44.641 [2024-11-26 07:38:12.531727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.531806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.531820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.531827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.531833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.531848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.641 [2024-11-26 07:38:12.541765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.541821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.541836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.541843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.541849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.541864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.641 [2024-11-26 07:38:12.551780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.551839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.551853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.551860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.551867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.551882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 
00:28:44.641 [2024-11-26 07:38:12.561809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.561864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.561881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.561888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.561894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.561909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.641 [2024-11-26 07:38:12.571834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.571890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.571905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.571912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.571918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.571934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.641 [2024-11-26 07:38:12.581828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.581887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.581901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.581907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.581913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.581928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 
00:28:44.641 [2024-11-26 07:38:12.591896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.641 [2024-11-26 07:38:12.591960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.641 [2024-11-26 07:38:12.591976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.641 [2024-11-26 07:38:12.591983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.641 [2024-11-26 07:38:12.591989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.641 [2024-11-26 07:38:12.592004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.641 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.601968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.602024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.602038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.602044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.602054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.602069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.611937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.611993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.612008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.612014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.612020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.612035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 
00:28:44.642 [2024-11-26 07:38:12.621972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.622033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.622047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.622054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.622060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.622074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.632096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.632154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.632168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.632175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.632181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.632195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.642032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.642089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.642104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.642110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.642116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.642131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 
00:28:44.642 [2024-11-26 07:38:12.652093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.652148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.652162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.652169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.652175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.652190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.662105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.662164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.662179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.662186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.662192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.662206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.672121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.672175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.672189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.672196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.672202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.672216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 
00:28:44.642 [2024-11-26 07:38:12.682176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.682231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.682245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.682252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.682258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.682273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.692161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.692226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.692244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.692251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.692256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.692271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.702220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.702279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.702293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.702300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.702306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.702321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 
00:28:44.642 [2024-11-26 07:38:12.712248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.712308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.712323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.712329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.712335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.712350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.642 qpair failed and we were unable to recover it. 00:28:44.642 [2024-11-26 07:38:12.722311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.642 [2024-11-26 07:38:12.722367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.642 [2024-11-26 07:38:12.722382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.642 [2024-11-26 07:38:12.722389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.642 [2024-11-26 07:38:12.722394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.642 [2024-11-26 07:38:12.722409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.643 qpair failed and we were unable to recover it. 00:28:44.643 [2024-11-26 07:38:12.732286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.643 [2024-11-26 07:38:12.732342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.643 [2024-11-26 07:38:12.732356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.643 [2024-11-26 07:38:12.732366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.643 [2024-11-26 07:38:12.732372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.643 [2024-11-26 07:38:12.732387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.643 qpair failed and we were unable to recover it. 
00:28:44.904 [2024-11-26 07:38:12.742278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.904 [2024-11-26 07:38:12.742340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.904 [2024-11-26 07:38:12.742357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.904 [2024-11-26 07:38:12.742364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.904 [2024-11-26 07:38:12.742370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.904 [2024-11-26 07:38:12.742385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.904 qpair failed and we were unable to recover it. 00:28:44.904 [2024-11-26 07:38:12.752348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.904 [2024-11-26 07:38:12.752409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.904 [2024-11-26 07:38:12.752423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.904 [2024-11-26 07:38:12.752430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.904 [2024-11-26 07:38:12.752436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.904 [2024-11-26 07:38:12.752451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.904 qpair failed and we were unable to recover it. 00:28:44.904 [2024-11-26 07:38:12.762357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.904 [2024-11-26 07:38:12.762417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.904 [2024-11-26 07:38:12.762432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.904 [2024-11-26 07:38:12.762438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.904 [2024-11-26 07:38:12.762444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.904 [2024-11-26 07:38:12.762460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.904 qpair failed and we were unable to recover it. 
00:28:44.904 [2024-11-26 07:38:12.772414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.904 [2024-11-26 07:38:12.772472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.904 [2024-11-26 07:38:12.772486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.904 [2024-11-26 07:38:12.772493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.904 [2024-11-26 07:38:12.772499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.904 [2024-11-26 07:38:12.772517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.904 qpair failed and we were unable to recover it. 00:28:44.904 [2024-11-26 07:38:12.782443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.782499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.782514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.782520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.782526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.782541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 00:28:44.905 [2024-11-26 07:38:12.792477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.792530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.792546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.792553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.792559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.792574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 
00:28:44.905 [2024-11-26 07:38:12.802503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.802558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.802572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.802578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.802584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.802599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 00:28:44.905 [2024-11-26 07:38:12.812556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.812610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.812624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.812631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.812637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.812652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 00:28:44.905 [2024-11-26 07:38:12.822548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.822614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.822628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.822635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.822641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.822655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 
00:28:44.905 [2024-11-26 07:38:12.832557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.832613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.832628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.832634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.832641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.832655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 00:28:44.905 [2024-11-26 07:38:12.842650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.842702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.842717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.842723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.842729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.842744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 00:28:44.905 [2024-11-26 07:38:12.852661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.852714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.852728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.852735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.852741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.852756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 
00:28:44.905 [2024-11-26 07:38:12.862659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.862718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.862732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.862742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.862748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.862761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 00:28:44.905 [2024-11-26 07:38:12.872669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.872727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.872741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.872748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.872754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.905 [2024-11-26 07:38:12.872769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.905 qpair failed and we were unable to recover it. 00:28:44.905 [2024-11-26 07:38:12.882745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.905 [2024-11-26 07:38:12.882808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.905 [2024-11-26 07:38:12.882822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.905 [2024-11-26 07:38:12.882829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.905 [2024-11-26 07:38:12.882835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.882850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 
00:28:44.906 [2024-11-26 07:38:12.892750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.892803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.892818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.892825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.892831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.892847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 00:28:44.906 [2024-11-26 07:38:12.902785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.902841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.902855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.902862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.902868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.902886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 00:28:44.906 [2024-11-26 07:38:12.912856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.912911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.912925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.912932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.912938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.912958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 
00:28:44.906 [2024-11-26 07:38:12.922905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.923012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.923027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.923033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.923039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.923054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 00:28:44.906 [2024-11-26 07:38:12.932886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.932944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.932962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.932969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.932975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.932990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 00:28:44.906 [2024-11-26 07:38:12.942942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.943005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.943019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.943026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.943032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.943046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 
00:28:44.906 [2024-11-26 07:38:12.952962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.953018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.953031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.953038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.953044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.953058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 00:28:44.906 [2024-11-26 07:38:12.963018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.963120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.963134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.963141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.963148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.963162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 00:28:44.906 [2024-11-26 07:38:12.973031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.973088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.973103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.906 [2024-11-26 07:38:12.973111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.906 [2024-11-26 07:38:12.973116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.906 [2024-11-26 07:38:12.973132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.906 qpair failed and we were unable to recover it. 
00:28:44.906 [2024-11-26 07:38:12.983023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.906 [2024-11-26 07:38:12.983088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.906 [2024-11-26 07:38:12.983103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.907 [2024-11-26 07:38:12.983109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.907 [2024-11-26 07:38:12.983115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.907 [2024-11-26 07:38:12.983130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.907 qpair failed and we were unable to recover it. 00:28:44.907 [2024-11-26 07:38:12.993062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.907 [2024-11-26 07:38:12.993136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.907 [2024-11-26 07:38:12.993154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.907 [2024-11-26 07:38:12.993161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.907 [2024-11-26 07:38:12.993167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:44.907 [2024-11-26 07:38:12.993182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:44.907 qpair failed and we were unable to recover it. 00:28:45.167 [2024-11-26 07:38:13.003081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.003135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.003149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.003155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.003161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.003175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 
00:28:45.168 [2024-11-26 07:38:13.013112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.013196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.013210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.013216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.013222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.013237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 00:28:45.168 [2024-11-26 07:38:13.023133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.023190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.023203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.023210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.023216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.023230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 00:28:45.168 [2024-11-26 07:38:13.033208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.033269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.033283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.033290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.033299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.033314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 
00:28:45.168 [2024-11-26 07:38:13.043186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.043244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.043258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.043264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.043270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.043285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 00:28:45.168 [2024-11-26 07:38:13.053233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.053309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.053323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.053330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.053336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.053351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 00:28:45.168 [2024-11-26 07:38:13.063232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.063290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.063304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.063310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.063316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.063331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 
00:28:45.168 [2024-11-26 07:38:13.073206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.073260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.073275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.073282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.073288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.073303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 00:28:45.168 [2024-11-26 07:38:13.083224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.083328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.083342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.083349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.083355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.083370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 00:28:45.168 [2024-11-26 07:38:13.093306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.093362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.093377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.093384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.093390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.093405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 
00:28:45.168 [2024-11-26 07:38:13.103367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.103424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.103439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.103445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.103451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.103466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 00:28:45.168 [2024-11-26 07:38:13.113381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.113441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.113455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.113461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.113467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.113482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.168 qpair failed and we were unable to recover it. 00:28:45.168 [2024-11-26 07:38:13.123464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.168 [2024-11-26 07:38:13.123517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.168 [2024-11-26 07:38:13.123535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.168 [2024-11-26 07:38:13.123542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.168 [2024-11-26 07:38:13.123548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.168 [2024-11-26 07:38:13.123562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 
00:28:45.169 [2024-11-26 07:38:13.133458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.133511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.133526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.133532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.133538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.133553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.169 [2024-11-26 07:38:13.143546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.143604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.143618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.143625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.143631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.143646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.169 [2024-11-26 07:38:13.153517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.153574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.153588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.153595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.153601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.153617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 
00:28:45.169 [2024-11-26 07:38:13.163568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.163623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.163637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.163644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.163655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.163670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.169 [2024-11-26 07:38:13.173567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.173622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.173636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.173642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.173648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.173663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.169 [2024-11-26 07:38:13.183675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.183771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.183784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.183791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.183797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.183812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 
00:28:45.169 [2024-11-26 07:38:13.193637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.193690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.193705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.193713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.193719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.193734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.169 [2024-11-26 07:38:13.203675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.203728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.203743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.203749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.203755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.203770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.169 [2024-11-26 07:38:13.213740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.213792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.213806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.213813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.213819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.213834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 
00:28:45.169 [2024-11-26 07:38:13.223781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.223855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.223870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.223877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.223883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.223897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.169 [2024-11-26 07:38:13.233701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.233759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.233773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.233780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.233786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.233801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.169 [2024-11-26 07:38:13.243802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.243854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.243869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.243875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.243881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.243896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 
00:28:45.169 [2024-11-26 07:38:13.253808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.169 [2024-11-26 07:38:13.253863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.169 [2024-11-26 07:38:13.253877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.169 [2024-11-26 07:38:13.253883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.169 [2024-11-26 07:38:13.253889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.169 [2024-11-26 07:38:13.253904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.169 qpair failed and we were unable to recover it. 00:28:45.430 [2024-11-26 07:38:13.263848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.430 [2024-11-26 07:38:13.263904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.430 [2024-11-26 07:38:13.263918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.430 [2024-11-26 07:38:13.263924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.430 [2024-11-26 07:38:13.263930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.430 [2024-11-26 07:38:13.263944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.430 qpair failed and we were unable to recover it. 00:28:45.430 [2024-11-26 07:38:13.273800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.430 [2024-11-26 07:38:13.273862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.430 [2024-11-26 07:38:13.273876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.430 [2024-11-26 07:38:13.273884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.430 [2024-11-26 07:38:13.273889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.430 [2024-11-26 07:38:13.273904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.430 qpair failed and we were unable to recover it. 
00:28:45.430 [2024-11-26 07:38:13.283891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.430 [2024-11-26 07:38:13.283951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.430 [2024-11-26 07:38:13.283966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.430 [2024-11-26 07:38:13.283973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.430 [2024-11-26 07:38:13.283979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.430 [2024-11-26 07:38:13.283993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.430 qpair failed and we were unable to recover it. 00:28:45.430 [2024-11-26 07:38:13.293909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.430 [2024-11-26 07:38:13.293970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.430 [2024-11-26 07:38:13.293985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.430 [2024-11-26 07:38:13.293995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.430 [2024-11-26 07:38:13.294001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.430 [2024-11-26 07:38:13.294017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.430 qpair failed and we were unable to recover it. 00:28:45.430 [2024-11-26 07:38:13.303982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.430 [2024-11-26 07:38:13.304087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.430 [2024-11-26 07:38:13.304101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.430 [2024-11-26 07:38:13.304107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.304114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.304129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 
00:28:45.431 [2024-11-26 07:38:13.313987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.314046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.314060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.314067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.314073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.314087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-11-26 07:38:13.324015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.324094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.324109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.324115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.324121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.324135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-11-26 07:38:13.334030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.334089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.334102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.334109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.334115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.334133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 
00:28:45.431 [2024-11-26 07:38:13.344070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.344126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.344140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.344146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.344152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.344167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-11-26 07:38:13.354116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.354201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.354215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.354222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.354228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.354244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-11-26 07:38:13.364126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.364178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.364192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.364198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.364205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.364219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 
00:28:45.431 [2024-11-26 07:38:13.374148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.374206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.374220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.374227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.374233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.374248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-11-26 07:38:13.384195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.384256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.384270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.384276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.384282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.384297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-11-26 07:38:13.394176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.394273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.394287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.394294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.394300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.394316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 
00:28:45.431 [2024-11-26 07:38:13.404291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.404346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.404361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.404368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.404373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.404389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.431 [2024-11-26 07:38:13.414261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.431 [2024-11-26 07:38:13.414316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.431 [2024-11-26 07:38:13.414330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.431 [2024-11-26 07:38:13.414337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.431 [2024-11-26 07:38:13.414343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.431 [2024-11-26 07:38:13.414358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.431 qpair failed and we were unable to recover it. 00:28:45.432 [2024-11-26 07:38:13.424315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.424391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.424405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.424415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.424420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.424436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 
00:28:45.432 [2024-11-26 07:38:13.434334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.434391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.434404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.434411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.434417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.434431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-11-26 07:38:13.444362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.444414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.444427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.444434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.444440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.444454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-11-26 07:38:13.454381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.454436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.454449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.454455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.454461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.454476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 
00:28:45.432 [2024-11-26 07:38:13.464412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.464474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.464488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.464495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.464501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.464519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-11-26 07:38:13.474473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.474533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.474548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.474555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.474561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.474576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-11-26 07:38:13.484498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.484555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.484569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.484576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.484582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.484596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 
00:28:45.432 [2024-11-26 07:38:13.494433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.494487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.494502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.494509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.494515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.494530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-11-26 07:38:13.504535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.504596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.504610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.504617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.504623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.504640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 00:28:45.432 [2024-11-26 07:38:13.514543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.432 [2024-11-26 07:38:13.514595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.432 [2024-11-26 07:38:13.514609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.432 [2024-11-26 07:38:13.514615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.432 [2024-11-26 07:38:13.514621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.432 [2024-11-26 07:38:13.514636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.432 qpair failed and we were unable to recover it. 
00:28:45.693 [2024-11-26 07:38:13.524511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.693 [2024-11-26 07:38:13.524568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.693 [2024-11-26 07:38:13.524582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.693 [2024-11-26 07:38:13.524589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.693 [2024-11-26 07:38:13.524595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.693 [2024-11-26 07:38:13.524611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.693 qpair failed and we were unable to recover it. 00:28:45.693 [2024-11-26 07:38:13.534548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.693 [2024-11-26 07:38:13.534603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.693 [2024-11-26 07:38:13.534617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.693 [2024-11-26 07:38:13.534624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.693 [2024-11-26 07:38:13.534630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.693 [2024-11-26 07:38:13.534645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.693 qpair failed and we were unable to recover it. 00:28:45.693 [2024-11-26 07:38:13.544636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.693 [2024-11-26 07:38:13.544691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.693 [2024-11-26 07:38:13.544705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.693 [2024-11-26 07:38:13.544712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.693 [2024-11-26 07:38:13.544718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.693 [2024-11-26 07:38:13.544732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.693 qpair failed and we were unable to recover it. 
00:28:45.693 [2024-11-26 07:38:13.554675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.693 [2024-11-26 07:38:13.554743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.693 [2024-11-26 07:38:13.554760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.693 [2024-11-26 07:38:13.554767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.693 [2024-11-26 07:38:13.554773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.693 [2024-11-26 07:38:13.554787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.693 qpair failed and we were unable to recover it. 00:28:45.693 [2024-11-26 07:38:13.564678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.693 [2024-11-26 07:38:13.564754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.693 [2024-11-26 07:38:13.564767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.693 [2024-11-26 07:38:13.564774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.693 [2024-11-26 07:38:13.564779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.693 [2024-11-26 07:38:13.564794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.693 qpair failed and we were unable to recover it. 00:28:45.693 [2024-11-26 07:38:13.574748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.693 [2024-11-26 07:38:13.574802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.693 [2024-11-26 07:38:13.574816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.693 [2024-11-26 07:38:13.574822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.693 [2024-11-26 07:38:13.574829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.574843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 
00:28:45.694 [2024-11-26 07:38:13.584701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.584762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.584775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.584782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.584788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.584802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-11-26 07:38:13.594758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.594835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.594849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.594856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.594865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.594880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-11-26 07:38:13.604808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.604864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.604878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.604885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.604891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.604905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 
00:28:45.694 [2024-11-26 07:38:13.614835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.614891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.614905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.614912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.614918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.614933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-11-26 07:38:13.624844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.624933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.624950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.624958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.624964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.624979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-11-26 07:38:13.634900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.634958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.634972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.634980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.634986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.635001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 
00:28:45.694 [2024-11-26 07:38:13.644926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.644982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.644996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.645003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.645009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.645023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-11-26 07:38:13.654879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.654931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.654945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.654955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.654961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.654976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-11-26 07:38:13.665027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.665085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.665099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.665106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.665112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.665127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 
00:28:45.694 [2024-11-26 07:38:13.675012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.675067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.675080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.675087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.675093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.675107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-11-26 07:38:13.684976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.685032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.685049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.685056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.685062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.685077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-11-26 07:38:13.695066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.695126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.695140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.695147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.695153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.695168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.694 qpair failed and we were unable to recover it. 
00:28:45.694 [2024-11-26 07:38:13.705135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.694 [2024-11-26 07:38:13.705207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.694 [2024-11-26 07:38:13.705221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.694 [2024-11-26 07:38:13.705227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.694 [2024-11-26 07:38:13.705233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.694 [2024-11-26 07:38:13.705248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-11-26 07:38:13.715169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.695 [2024-11-26 07:38:13.715222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.695 [2024-11-26 07:38:13.715236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.695 [2024-11-26 07:38:13.715243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.695 [2024-11-26 07:38:13.715249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.695 [2024-11-26 07:38:13.715264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-11-26 07:38:13.725147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.695 [2024-11-26 07:38:13.725203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.695 [2024-11-26 07:38:13.725217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.695 [2024-11-26 07:38:13.725224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.695 [2024-11-26 07:38:13.725234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.695 [2024-11-26 07:38:13.725249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 
00:28:45.695 [2024-11-26 07:38:13.735161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.695 [2024-11-26 07:38:13.735219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.695 [2024-11-26 07:38:13.735233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.695 [2024-11-26 07:38:13.735239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.695 [2024-11-26 07:38:13.735246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.695 [2024-11-26 07:38:13.735261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-11-26 07:38:13.745257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.695 [2024-11-26 07:38:13.745320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.695 [2024-11-26 07:38:13.745335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.695 [2024-11-26 07:38:13.745342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.695 [2024-11-26 07:38:13.745348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.695 [2024-11-26 07:38:13.745363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-11-26 07:38:13.755250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.695 [2024-11-26 07:38:13.755352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.695 [2024-11-26 07:38:13.755366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.695 [2024-11-26 07:38:13.755372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.695 [2024-11-26 07:38:13.755378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.695 [2024-11-26 07:38:13.755393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 
00:28:45.695 [2024-11-26 07:38:13.765199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.695 [2024-11-26 07:38:13.765269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.695 [2024-11-26 07:38:13.765283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.695 [2024-11-26 07:38:13.765290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.695 [2024-11-26 07:38:13.765296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.695 [2024-11-26 07:38:13.765311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-11-26 07:38:13.775323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.695 [2024-11-26 07:38:13.775378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.695 [2024-11-26 07:38:13.775392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.695 [2024-11-26 07:38:13.775399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.695 [2024-11-26 07:38:13.775405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.695 [2024-11-26 07:38:13.775419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-11-26 07:38:13.785269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.695 [2024-11-26 07:38:13.785330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.695 [2024-11-26 07:38:13.785344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.695 [2024-11-26 07:38:13.785351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.695 [2024-11-26 07:38:13.785357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.695 [2024-11-26 07:38:13.785371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.695 qpair failed and we were unable to recover it. 
00:28:45.955 [2024-11-26 07:38:13.795379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.795457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.795472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.795479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.795484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.955 [2024-11-26 07:38:13.795500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.955 qpair failed and we were unable to recover it. 00:28:45.955 [2024-11-26 07:38:13.805392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.805444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.805458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.805465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.805470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.955 [2024-11-26 07:38:13.805485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.955 qpair failed and we were unable to recover it. 00:28:45.955 [2024-11-26 07:38:13.815404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.815460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.815474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.815481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.815487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.955 [2024-11-26 07:38:13.815501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.955 qpair failed and we were unable to recover it. 
00:28:45.955 [2024-11-26 07:38:13.825507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.825604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.825619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.825625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.825631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.955 [2024-11-26 07:38:13.825646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.955 qpair failed and we were unable to recover it. 00:28:45.955 [2024-11-26 07:38:13.835472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.835523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.835537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.835544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.835550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.955 [2024-11-26 07:38:13.835565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.955 qpair failed and we were unable to recover it. 00:28:45.955 [2024-11-26 07:38:13.845530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.845593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.845607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.845613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.845619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.955 [2024-11-26 07:38:13.845634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.955 qpair failed and we were unable to recover it. 
00:28:45.955 [2024-11-26 07:38:13.855526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.855585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.855598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.855608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.855614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.955 [2024-11-26 07:38:13.855629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.955 qpair failed and we were unable to recover it. 00:28:45.955 [2024-11-26 07:38:13.865566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.865621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.865635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.865642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.865648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.955 [2024-11-26 07:38:13.865662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.955 qpair failed and we were unable to recover it. 00:28:45.955 [2024-11-26 07:38:13.875593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.955 [2024-11-26 07:38:13.875650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.955 [2024-11-26 07:38:13.875664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.955 [2024-11-26 07:38:13.875671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.955 [2024-11-26 07:38:13.875677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.875692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 
00:28:45.956 [2024-11-26 07:38:13.885617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.885674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.885688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.885694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.885700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.885715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 00:28:45.956 [2024-11-26 07:38:13.895638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.895694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.895709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.895717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.895722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.895741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 00:28:45.956 [2024-11-26 07:38:13.905691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.905746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.905760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.905767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.905773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.905787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 
00:28:45.956 [2024-11-26 07:38:13.915715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.915777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.915791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.915797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.915803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.915819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 00:28:45.956 [2024-11-26 07:38:13.925734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.925786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.925800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.925806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.925812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.925826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 00:28:45.956 [2024-11-26 07:38:13.935753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.935807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.935821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.935828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.935834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.935849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 
00:28:45.956 [2024-11-26 07:38:13.945804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.945879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.945893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.945899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.945905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.945920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 00:28:45.956 [2024-11-26 07:38:13.955826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.955885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.955899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.955906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.955912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.955927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 00:28:45.956 [2024-11-26 07:38:13.965843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.965898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.965912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.965918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.965924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.965939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 
00:28:45.956 [2024-11-26 07:38:13.975904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.976003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.976018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.976025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.976031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.976046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 00:28:45.956 [2024-11-26 07:38:13.985939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.986002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.986016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.986028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.986034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.986049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 00:28:45.956 [2024-11-26 07:38:13.995932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:13.995993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:13.996008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:13.996015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:13.996021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:13.996036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.956 qpair failed and we were unable to recover it. 
00:28:45.956 [2024-11-26 07:38:14.005951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.956 [2024-11-26 07:38:14.006007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.956 [2024-11-26 07:38:14.006022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.956 [2024-11-26 07:38:14.006029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.956 [2024-11-26 07:38:14.006035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.956 [2024-11-26 07:38:14.006050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.957 qpair failed and we were unable to recover it. 00:28:45.957 [2024-11-26 07:38:14.015983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.957 [2024-11-26 07:38:14.016041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.957 [2024-11-26 07:38:14.016055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.957 [2024-11-26 07:38:14.016061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.957 [2024-11-26 07:38:14.016067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.957 [2024-11-26 07:38:14.016082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.957 qpair failed and we were unable to recover it. 00:28:45.957 [2024-11-26 07:38:14.026034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.957 [2024-11-26 07:38:14.026109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.957 [2024-11-26 07:38:14.026124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.957 [2024-11-26 07:38:14.026130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.957 [2024-11-26 07:38:14.026136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.957 [2024-11-26 07:38:14.026155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.957 qpair failed and we were unable to recover it. 
00:28:45.957 [2024-11-26 07:38:14.036086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.957 [2024-11-26 07:38:14.036143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.957 [2024-11-26 07:38:14.036157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.957 [2024-11-26 07:38:14.036163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.957 [2024-11-26 07:38:14.036169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.957 [2024-11-26 07:38:14.036184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.957 qpair failed and we were unable to recover it. 00:28:45.957 [2024-11-26 07:38:14.046136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:45.957 [2024-11-26 07:38:14.046196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:45.957 [2024-11-26 07:38:14.046210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:45.957 [2024-11-26 07:38:14.046217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:45.957 [2024-11-26 07:38:14.046223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:45.957 [2024-11-26 07:38:14.046238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.957 qpair failed and we were unable to recover it. 00:28:46.217 [2024-11-26 07:38:14.056104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.217 [2024-11-26 07:38:14.056188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.217 [2024-11-26 07:38:14.056202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.217 [2024-11-26 07:38:14.056209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.217 [2024-11-26 07:38:14.056215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.217 [2024-11-26 07:38:14.056229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.217 qpair failed and we were unable to recover it. 
00:28:46.217 [2024-11-26 07:38:14.066168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.217 [2024-11-26 07:38:14.066228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.217 [2024-11-26 07:38:14.066241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.217 [2024-11-26 07:38:14.066248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.217 [2024-11-26 07:38:14.066254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.217 [2024-11-26 07:38:14.066268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.217 qpair failed and we were unable to recover it. 00:28:46.217 [2024-11-26 07:38:14.076201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.217 [2024-11-26 07:38:14.076257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.217 [2024-11-26 07:38:14.076271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.217 [2024-11-26 07:38:14.076277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.217 [2024-11-26 07:38:14.076283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.217 [2024-11-26 07:38:14.076297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.217 qpair failed and we were unable to recover it. 00:28:46.217 [2024-11-26 07:38:14.086205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.217 [2024-11-26 07:38:14.086254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.217 [2024-11-26 07:38:14.086268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.217 [2024-11-26 07:38:14.086275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.217 [2024-11-26 07:38:14.086280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.217 [2024-11-26 07:38:14.086295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.217 qpair failed and we were unable to recover it. 
00:28:46.217 [2024-11-26 07:38:14.096213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.217 [2024-11-26 07:38:14.096267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.096282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.096289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.096295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.096310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.106276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.106368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.106383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.106389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.106395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.106410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.116284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.116341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.116357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.116364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.116370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.116384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 
00:28:46.218 [2024-11-26 07:38:14.126232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.126289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.126303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.126309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.126315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.126330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.136388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.136445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.136459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.136466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.136472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.136487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.146300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.146359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.146373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.146380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.146386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.146401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 
00:28:46.218 [2024-11-26 07:38:14.156320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.156378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.156393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.156399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.156409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.156424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.166345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.166400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.166414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.166421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.166427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.166442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.176405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.176479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.176493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.176500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.176506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.176521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 
00:28:46.218 [2024-11-26 07:38:14.186477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.186536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.186550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.186557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.186563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.186578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.196500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.196559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.196574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.196581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.196587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.196602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.206523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.206590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.206605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.206611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.206617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.206631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 
00:28:46.218 [2024-11-26 07:38:14.216521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.216597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.216611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.218 [2024-11-26 07:38:14.216617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.218 [2024-11-26 07:38:14.216623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.218 [2024-11-26 07:38:14.216638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.218 qpair failed and we were unable to recover it. 00:28:46.218 [2024-11-26 07:38:14.226580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.218 [2024-11-26 07:38:14.226638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.218 [2024-11-26 07:38:14.226653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.226660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.226666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.226680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 00:28:46.219 [2024-11-26 07:38:14.236654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.219 [2024-11-26 07:38:14.236743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.219 [2024-11-26 07:38:14.236756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.236763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.236769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.236784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 
00:28:46.219 [2024-11-26 07:38:14.246648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.219 [2024-11-26 07:38:14.246748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.219 [2024-11-26 07:38:14.246765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.246771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.246777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.246792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 00:28:46.219 [2024-11-26 07:38:14.256681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.219 [2024-11-26 07:38:14.256734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.219 [2024-11-26 07:38:14.256747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.256754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.256760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.256774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 00:28:46.219 [2024-11-26 07:38:14.266678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.219 [2024-11-26 07:38:14.266751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.219 [2024-11-26 07:38:14.266765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.266772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.266778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.266792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 
00:28:46.219 [2024-11-26 07:38:14.276685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.219 [2024-11-26 07:38:14.276777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.219 [2024-11-26 07:38:14.276791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.276798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.276803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.276818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 00:28:46.219 [2024-11-26 07:38:14.286806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.219 [2024-11-26 07:38:14.286896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.219 [2024-11-26 07:38:14.286910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.286917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.286926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.286941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 00:28:46.219 [2024-11-26 07:38:14.296825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.219 [2024-11-26 07:38:14.296880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.219 [2024-11-26 07:38:14.296895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.296902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.296908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.296922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 
00:28:46.219 [2024-11-26 07:38:14.306817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.219 [2024-11-26 07:38:14.306873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.219 [2024-11-26 07:38:14.306888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.219 [2024-11-26 07:38:14.306894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.219 [2024-11-26 07:38:14.306900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.219 [2024-11-26 07:38:14.306915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.219 qpair failed and we were unable to recover it. 00:28:46.480 [2024-11-26 07:38:14.316811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.316881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.316895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.316901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.316907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.316922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 00:28:46.480 [2024-11-26 07:38:14.326903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.326962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.326976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.326983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.326989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.327005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 
00:28:46.480 [2024-11-26 07:38:14.336964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.337020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.337034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.337041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.337047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.337061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 00:28:46.480 [2024-11-26 07:38:14.346998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.347057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.347070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.347077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.347083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.347097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 00:28:46.480 [2024-11-26 07:38:14.357023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.357085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.357099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.357105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.357111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.357125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 
00:28:46.480 [2024-11-26 07:38:14.366996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.367084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.367098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.367105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.367111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.367125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 00:28:46.480 [2024-11-26 07:38:14.376980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.377036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.377049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.377056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.377062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.377077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 00:28:46.480 [2024-11-26 07:38:14.387059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.387118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.387132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.387139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.387144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.387159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 
00:28:46.480 [2024-11-26 07:38:14.397134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.397216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.397230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.397237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.397243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.397258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 00:28:46.480 [2024-11-26 07:38:14.407123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.480 [2024-11-26 07:38:14.407178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.480 [2024-11-26 07:38:14.407192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.480 [2024-11-26 07:38:14.407198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.480 [2024-11-26 07:38:14.407204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.480 [2024-11-26 07:38:14.407219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.480 qpair failed and we were unable to recover it. 00:28:46.480 [2024-11-26 07:38:14.417136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.417194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.417208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.417218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.417224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.417238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 
00:28:46.481 [2024-11-26 07:38:14.427139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.427196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.427210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.427216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.427222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.427237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 00:28:46.481 [2024-11-26 07:38:14.437157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.437215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.437229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.437236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.437243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.437257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 00:28:46.481 [2024-11-26 07:38:14.447235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.447288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.447302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.447309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.447315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.447329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 
00:28:46.481 [2024-11-26 07:38:14.457210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.457278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.457292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.457298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.457304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.457322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 00:28:46.481 [2024-11-26 07:38:14.467298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.467363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.467377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.467384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.467390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.467405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 00:28:46.481 [2024-11-26 07:38:14.477311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.477369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.477384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.477391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.477397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.477412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 
00:28:46.481 [2024-11-26 07:38:14.487346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.487398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.487412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.487418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.487424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.487439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 00:28:46.481 [2024-11-26 07:38:14.497421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.497503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.497518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.497525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.497531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.497546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 00:28:46.481 [2024-11-26 07:38:14.507366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.507430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.507444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.507450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.507456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.507471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 
00:28:46.481 [2024-11-26 07:38:14.517438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.517496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.517509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.517515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.517521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.517536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 00:28:46.481 [2024-11-26 07:38:14.527465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.527547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.527560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.527567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.527573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.527587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 00:28:46.481 [2024-11-26 07:38:14.537482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.537538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.481 [2024-11-26 07:38:14.537552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.481 [2024-11-26 07:38:14.537559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.481 [2024-11-26 07:38:14.537565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.481 [2024-11-26 07:38:14.537579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.481 qpair failed and we were unable to recover it. 
00:28:46.481 [2024-11-26 07:38:14.547481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.481 [2024-11-26 07:38:14.547538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.482 [2024-11-26 07:38:14.547556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.482 [2024-11-26 07:38:14.547563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.482 [2024-11-26 07:38:14.547569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.482 [2024-11-26 07:38:14.547584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.482 qpair failed and we were unable to recover it. 00:28:46.482 [2024-11-26 07:38:14.557498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.482 [2024-11-26 07:38:14.557557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.482 [2024-11-26 07:38:14.557570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.482 [2024-11-26 07:38:14.557577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.482 [2024-11-26 07:38:14.557583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.482 [2024-11-26 07:38:14.557598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.482 qpair failed and we were unable to recover it. 00:28:46.482 [2024-11-26 07:38:14.567521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.482 [2024-11-26 07:38:14.567575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.482 [2024-11-26 07:38:14.567588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.482 [2024-11-26 07:38:14.567595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.482 [2024-11-26 07:38:14.567601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.482 [2024-11-26 07:38:14.567616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.482 qpair failed and we were unable to recover it. 
00:28:46.742 [2024-11-26 07:38:14.577587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.742 [2024-11-26 07:38:14.577683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.742 [2024-11-26 07:38:14.577697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.742 [2024-11-26 07:38:14.577704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.742 [2024-11-26 07:38:14.577709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.742 [2024-11-26 07:38:14.577725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.742 qpair failed and we were unable to recover it. 00:28:46.742 [2024-11-26 07:38:14.587643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.742 [2024-11-26 07:38:14.587701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.742 [2024-11-26 07:38:14.587715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.742 [2024-11-26 07:38:14.587721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.742 [2024-11-26 07:38:14.587727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.742 [2024-11-26 07:38:14.587745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.742 qpair failed and we were unable to recover it. 00:28:46.742 [2024-11-26 07:38:14.597735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.742 [2024-11-26 07:38:14.597787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.742 [2024-11-26 07:38:14.597801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.742 [2024-11-26 07:38:14.597808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.742 [2024-11-26 07:38:14.597814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.742 [2024-11-26 07:38:14.597829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.742 qpair failed and we were unable to recover it. 
00:28:46.742 [2024-11-26 07:38:14.607730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.742 [2024-11-26 07:38:14.607781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.742 [2024-11-26 07:38:14.607795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.742 [2024-11-26 07:38:14.607802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.607808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.607822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.617719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.617798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.617813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.617819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.617825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.617840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.627767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.627823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.627837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.627844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.627850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.627865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 
00:28:46.743 [2024-11-26 07:38:14.637839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.637900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.637914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.637920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.637926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.637940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.647867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.647961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.647975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.647982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.647988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.648002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.657864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.657921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.657934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.657941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.657950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.657966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 
00:28:46.743 [2024-11-26 07:38:14.667934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.668042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.668057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.668063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.668069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.668084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.677922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.677986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.678004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.678010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.678016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.678031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.687945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.688012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.688026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.688032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.688038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.688052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 
00:28:46.743 [2024-11-26 07:38:14.697978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.698033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.698047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.698053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.698059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.698074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.708031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.708100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.708114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.708121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.708127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.708142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.718108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.718168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.718182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.718188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.718199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.718215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 
00:28:46.743 [2024-11-26 07:38:14.728051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.728110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.728125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.728132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.743 [2024-11-26 07:38:14.728137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.743 [2024-11-26 07:38:14.728153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.743 qpair failed and we were unable to recover it. 00:28:46.743 [2024-11-26 07:38:14.738071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.743 [2024-11-26 07:38:14.738128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.743 [2024-11-26 07:38:14.738142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.743 [2024-11-26 07:38:14.738149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.738155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.738169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 00:28:46.744 [2024-11-26 07:38:14.748120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.748180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.748197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.748204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.748211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.748226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 
00:28:46.744 [2024-11-26 07:38:14.758192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.758248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.758262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.758269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.758275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.758290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 00:28:46.744 [2024-11-26 07:38:14.768112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.768205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.768219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.768226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.768232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.768247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 00:28:46.744 [2024-11-26 07:38:14.778171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.778222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.778237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.778243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.778250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.778264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 
00:28:46.744 [2024-11-26 07:38:14.788234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.788291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.788305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.788312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.788318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.788332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 00:28:46.744 [2024-11-26 07:38:14.798267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.798323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.798339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.798346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.798352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.798368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 00:28:46.744 [2024-11-26 07:38:14.808277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.808334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.808351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.808358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.808364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.808378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 
00:28:46.744 [2024-11-26 07:38:14.818329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.818387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.818401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.818408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.818413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.818428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 00:28:46.744 [2024-11-26 07:38:14.828347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:46.744 [2024-11-26 07:38:14.828403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:46.744 [2024-11-26 07:38:14.828417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:46.744 [2024-11-26 07:38:14.828423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:46.744 [2024-11-26 07:38:14.828429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:46.744 [2024-11-26 07:38:14.828443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:46.744 qpair failed and we were unable to recover it. 00:28:47.003 [2024-11-26 07:38:14.838375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.003 [2024-11-26 07:38:14.838431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.003 [2024-11-26 07:38:14.838444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.003 [2024-11-26 07:38:14.838451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.003 [2024-11-26 07:38:14.838457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.003 [2024-11-26 07:38:14.838471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.003 qpair failed and we were unable to recover it. 
00:28:47.003 [2024-11-26 07:38:14.848392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.003 [2024-11-26 07:38:14.848472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.003 [2024-11-26 07:38:14.848486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.003 [2024-11-26 07:38:14.848496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.003 [2024-11-26 07:38:14.848502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.003 [2024-11-26 07:38:14.848516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.003 qpair failed and we were unable to recover it. 00:28:47.003 [2024-11-26 07:38:14.858421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.003 [2024-11-26 07:38:14.858476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.003 [2024-11-26 07:38:14.858490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.003 [2024-11-26 07:38:14.858496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.003 [2024-11-26 07:38:14.858502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.003 [2024-11-26 07:38:14.858517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.003 qpair failed and we were unable to recover it. 00:28:47.003 [2024-11-26 07:38:14.868464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.003 [2024-11-26 07:38:14.868520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.003 [2024-11-26 07:38:14.868533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.003 [2024-11-26 07:38:14.868540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.003 [2024-11-26 07:38:14.868546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.003 [2024-11-26 07:38:14.868561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.003 qpair failed and we were unable to recover it. 
00:28:47.003 [2024-11-26 07:38:14.878488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.003 [2024-11-26 07:38:14.878591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.003 [2024-11-26 07:38:14.878605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.003 [2024-11-26 07:38:14.878611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.003 [2024-11-26 07:38:14.878617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.003 [2024-11-26 07:38:14.878632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.003 qpair failed and we were unable to recover it. 00:28:47.003 [2024-11-26 07:38:14.888502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.003 [2024-11-26 07:38:14.888563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.003 [2024-11-26 07:38:14.888577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.003 [2024-11-26 07:38:14.888584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.003 [2024-11-26 07:38:14.888590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.003 [2024-11-26 07:38:14.888605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.003 qpair failed and we were unable to recover it. 00:28:47.003 [2024-11-26 07:38:14.898565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.003 [2024-11-26 07:38:14.898619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.003 [2024-11-26 07:38:14.898633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.898640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.898646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.898661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 
00:28:47.004 [2024-11-26 07:38:14.908570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.908624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.908638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.908644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.908650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.908665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:14.918652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.918726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.918739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.918746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.918752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.918766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:14.928631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.928697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.928711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.928717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.928723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.928738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 
00:28:47.004 [2024-11-26 07:38:14.938653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.938712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.938726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.938733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.938739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.938753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:14.948682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.948738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.948752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.948759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.948764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.948779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:14.958750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.958814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.958828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.958835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.958841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.958856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 
00:28:47.004 [2024-11-26 07:38:14.968756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.968857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.968871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.968877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.968883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.968898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:14.978778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.978848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.978863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.978873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.978879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.978894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:14.988810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.988866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.988881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.988888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.988894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.988909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 
00:28:47.004 [2024-11-26 07:38:14.998844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:14.998943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:14.998961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:14.998967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:14.998973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:14.998988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:15.008892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:15.008962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:15.008977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:15.008984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:15.008990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:15.009005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:15.018955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:15.019036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:15.019050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:15.019056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:15.019062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:15.019081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 
00:28:47.004 [2024-11-26 07:38:15.028944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:15.029006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:15.029020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:15.029027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:15.029033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.004 [2024-11-26 07:38:15.029047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.004 qpair failed and we were unable to recover it. 00:28:47.004 [2024-11-26 07:38:15.038951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.004 [2024-11-26 07:38:15.039026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.004 [2024-11-26 07:38:15.039039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.004 [2024-11-26 07:38:15.039046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.004 [2024-11-26 07:38:15.039051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.005 [2024-11-26 07:38:15.039066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.005 qpair failed and we were unable to recover it. 00:28:47.005 [2024-11-26 07:38:15.049005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.005 [2024-11-26 07:38:15.049073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.005 [2024-11-26 07:38:15.049087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.005 [2024-11-26 07:38:15.049094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.005 [2024-11-26 07:38:15.049100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.005 [2024-11-26 07:38:15.049115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.005 qpair failed and we were unable to recover it. 
00:28:47.005 [2024-11-26 07:38:15.059042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.005 [2024-11-26 07:38:15.059097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.005 [2024-11-26 07:38:15.059111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.005 [2024-11-26 07:38:15.059118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.005 [2024-11-26 07:38:15.059124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.005 [2024-11-26 07:38:15.059138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.005 qpair failed and we were unable to recover it. 00:28:47.005 [2024-11-26 07:38:15.069076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.005 [2024-11-26 07:38:15.069189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.005 [2024-11-26 07:38:15.069203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.005 [2024-11-26 07:38:15.069210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.005 [2024-11-26 07:38:15.069216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.005 [2024-11-26 07:38:15.069231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.005 qpair failed and we were unable to recover it. 00:28:47.005 [2024-11-26 07:38:15.079094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.005 [2024-11-26 07:38:15.079162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.005 [2024-11-26 07:38:15.079176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.005 [2024-11-26 07:38:15.079182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.005 [2024-11-26 07:38:15.079188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.005 [2024-11-26 07:38:15.079203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.005 qpair failed and we were unable to recover it. 
00:28:47.005 [2024-11-26 07:38:15.089117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.005 [2024-11-26 07:38:15.089173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.005 [2024-11-26 07:38:15.089187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.005 [2024-11-26 07:38:15.089194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.005 [2024-11-26 07:38:15.089200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.005 [2024-11-26 07:38:15.089214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.005 qpair failed and we were unable to recover it. 00:28:47.265 [2024-11-26 07:38:15.099170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.265 [2024-11-26 07:38:15.099224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.265 [2024-11-26 07:38:15.099238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.265 [2024-11-26 07:38:15.099244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.265 [2024-11-26 07:38:15.099250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.265 [2024-11-26 07:38:15.099265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.265 qpair failed and we were unable to recover it. 00:28:47.265 [2024-11-26 07:38:15.109155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.265 [2024-11-26 07:38:15.109213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.265 [2024-11-26 07:38:15.109230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.265 [2024-11-26 07:38:15.109236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.265 [2024-11-26 07:38:15.109242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.265 [2024-11-26 07:38:15.109256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.265 qpair failed and we were unable to recover it. 
00:28:47.265 [2024-11-26 07:38:15.119194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.119267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.119281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.119287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.119293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.119307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 00:28:47.266 [2024-11-26 07:38:15.129228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.129281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.129294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.129301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.129307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.129321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 00:28:47.266 [2024-11-26 07:38:15.139238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.139293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.139307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.139314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.139320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.139334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 
00:28:47.266 [2024-11-26 07:38:15.149321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.149381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.149394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.149401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.149407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.149425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 00:28:47.266 [2024-11-26 07:38:15.159321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.159378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.159392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.159398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.159404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.159419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 00:28:47.266 [2024-11-26 07:38:15.169317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.169374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.169389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.169396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.169403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.169420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 
00:28:47.266 [2024-11-26 07:38:15.179397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.179458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.179472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.179479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.179485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.179499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 00:28:47.266 [2024-11-26 07:38:15.189398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.189457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.189472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.189479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.189485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.189499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 00:28:47.266 [2024-11-26 07:38:15.199455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.199512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.199526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.199533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.199539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.199554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 
00:28:47.266 [2024-11-26 07:38:15.209444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.209501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.209514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.209521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.209527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.209542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 00:28:47.266 [2024-11-26 07:38:15.219477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.219528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.219541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.219547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.219554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.219568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 00:28:47.266 [2024-11-26 07:38:15.229511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.229570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.229584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.229591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.229597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.229612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.266 qpair failed and we were unable to recover it. 
00:28:47.266 [2024-11-26 07:38:15.239540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.266 [2024-11-26 07:38:15.239618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.266 [2024-11-26 07:38:15.239635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.266 [2024-11-26 07:38:15.239642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.266 [2024-11-26 07:38:15.239648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.266 [2024-11-26 07:38:15.239662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 00:28:47.267 [2024-11-26 07:38:15.249558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.249652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.249667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.249673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.249679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.249694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 00:28:47.267 [2024-11-26 07:38:15.259657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.259742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.259756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.259762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.259768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.259783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 
00:28:47.267 [2024-11-26 07:38:15.269683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.269741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.269755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.269762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.269768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.269782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 00:28:47.267 [2024-11-26 07:38:15.279666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.279721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.279735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.279742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.279751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.279766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 00:28:47.267 [2024-11-26 07:38:15.289689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.289771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.289785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.289792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.289798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.289813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 
00:28:47.267 [2024-11-26 07:38:15.299709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.299762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.299776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.299782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.299788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.299803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 00:28:47.267 [2024-11-26 07:38:15.309748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.309806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.309820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.309827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.309833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.309848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 00:28:47.267 [2024-11-26 07:38:15.319768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.319852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.319866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.319873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.319878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.319893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 
00:28:47.267 [2024-11-26 07:38:15.329838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.329899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.329913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.329920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.329926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.329940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 00:28:47.267 [2024-11-26 07:38:15.339810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.339869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.339883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.339889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.339895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.339910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 00:28:47.267 [2024-11-26 07:38:15.349897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.267 [2024-11-26 07:38:15.349962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.267 [2024-11-26 07:38:15.349977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.267 [2024-11-26 07:38:15.349984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.267 [2024-11-26 07:38:15.349990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.267 [2024-11-26 07:38:15.350004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.267 qpair failed and we were unable to recover it. 
00:28:47.529 [2024-11-26 07:38:15.359888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.529 [2024-11-26 07:38:15.359986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.529 [2024-11-26 07:38:15.359999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.529 [2024-11-26 07:38:15.360006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.529 [2024-11-26 07:38:15.360012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.529 [2024-11-26 07:38:15.360026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.529 qpair failed and we were unable to recover it. 00:28:47.529 [2024-11-26 07:38:15.369924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.529 [2024-11-26 07:38:15.369988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.529 [2024-11-26 07:38:15.370005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.529 [2024-11-26 07:38:15.370011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.529 [2024-11-26 07:38:15.370017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.529 [2024-11-26 07:38:15.370032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.529 qpair failed and we were unable to recover it. 00:28:47.529 [2024-11-26 07:38:15.379933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.529 [2024-11-26 07:38:15.379995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.529 [2024-11-26 07:38:15.380009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.529 [2024-11-26 07:38:15.380016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.529 [2024-11-26 07:38:15.380021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.529 [2024-11-26 07:38:15.380036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.529 qpair failed and we were unable to recover it. 
00:28:47.529 [2024-11-26 07:38:15.389965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.529 [2024-11-26 07:38:15.390023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.529 [2024-11-26 07:38:15.390038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.529 [2024-11-26 07:38:15.390045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.529 [2024-11-26 07:38:15.390051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.529 [2024-11-26 07:38:15.390066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.529 qpair failed and we were unable to recover it. 00:28:47.529 [2024-11-26 07:38:15.399998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.529 [2024-11-26 07:38:15.400057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.529 [2024-11-26 07:38:15.400071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.529 [2024-11-26 07:38:15.400077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.529 [2024-11-26 07:38:15.400083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.529 [2024-11-26 07:38:15.400098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.529 qpair failed and we were unable to recover it. 00:28:47.529 [2024-11-26 07:38:15.410017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.529 [2024-11-26 07:38:15.410074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.529 [2024-11-26 07:38:15.410088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.529 [2024-11-26 07:38:15.410097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.529 [2024-11-26 07:38:15.410103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.529 [2024-11-26 07:38:15.410118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.529 qpair failed and we were unable to recover it. 
00:28:47.529 [2024-11-26 07:38:15.420059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.529 [2024-11-26 07:38:15.420112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.529 [2024-11-26 07:38:15.420125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.529 [2024-11-26 07:38:15.420132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.529 [2024-11-26 07:38:15.420138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.420152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.430142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.430199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.430213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.430219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.430225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.430240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.440140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.440202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.440216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.440223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.440229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.440243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 
00:28:47.530 [2024-11-26 07:38:15.450141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.450199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.450211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.450218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.450224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.450239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.460090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.460152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.460166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.460172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.460178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.460192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.470246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.470307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.470321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.470328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.470333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.470348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 
00:28:47.530 [2024-11-26 07:38:15.480224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.480280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.480295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.480303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.480308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.480323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.490248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.490303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.490317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.490324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.490330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.490345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.500199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.500261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.500274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.500281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.500287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.500302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 
00:28:47.530 [2024-11-26 07:38:15.510243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.510345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.510360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.510366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.510372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.510387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.520347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.520407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.520421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.520428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.520434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.520449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.530452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.530507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.530521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.530528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.530534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.530548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 
00:28:47.530 [2024-11-26 07:38:15.540437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.540491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.540505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.540515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.540521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.540536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.530 qpair failed and we were unable to recover it. 00:28:47.530 [2024-11-26 07:38:15.550498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.530 [2024-11-26 07:38:15.550561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.530 [2024-11-26 07:38:15.550575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.530 [2024-11-26 07:38:15.550581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.530 [2024-11-26 07:38:15.550587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.530 [2024-11-26 07:38:15.550602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.531 qpair failed and we were unable to recover it. 00:28:47.531 [2024-11-26 07:38:15.560478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.531 [2024-11-26 07:38:15.560547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.531 [2024-11-26 07:38:15.560561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.531 [2024-11-26 07:38:15.560568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.531 [2024-11-26 07:38:15.560574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.531 [2024-11-26 07:38:15.560588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.531 qpair failed and we were unable to recover it. 
00:28:47.531 [2024-11-26 07:38:15.570495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.531 [2024-11-26 07:38:15.570592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.531 [2024-11-26 07:38:15.570606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.531 [2024-11-26 07:38:15.570612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.531 [2024-11-26 07:38:15.570618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.531 [2024-11-26 07:38:15.570633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.531 qpair failed and we were unable to recover it. 00:28:47.531 [2024-11-26 07:38:15.580514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.531 [2024-11-26 07:38:15.580569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.531 [2024-11-26 07:38:15.580583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.531 [2024-11-26 07:38:15.580590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.531 [2024-11-26 07:38:15.580595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.531 [2024-11-26 07:38:15.580614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.531 qpair failed and we were unable to recover it. 00:28:47.531 [2024-11-26 07:38:15.590544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.531 [2024-11-26 07:38:15.590607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.531 [2024-11-26 07:38:15.590621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.531 [2024-11-26 07:38:15.590629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.531 [2024-11-26 07:38:15.590635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.531 [2024-11-26 07:38:15.590650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.531 qpair failed and we were unable to recover it. 
00:28:47.531 [2024-11-26 07:38:15.600602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.531 [2024-11-26 07:38:15.600685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.531 [2024-11-26 07:38:15.600700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.531 [2024-11-26 07:38:15.600706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.531 [2024-11-26 07:38:15.600712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.531 [2024-11-26 07:38:15.600727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.531 qpair failed and we were unable to recover it. 00:28:47.531 [2024-11-26 07:38:15.610609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.531 [2024-11-26 07:38:15.610667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.531 [2024-11-26 07:38:15.610681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.531 [2024-11-26 07:38:15.610688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.531 [2024-11-26 07:38:15.610694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.531 [2024-11-26 07:38:15.610709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.531 qpair failed and we were unable to recover it. 00:28:47.531 [2024-11-26 07:38:15.620695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.531 [2024-11-26 07:38:15.620795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.531 [2024-11-26 07:38:15.620809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.531 [2024-11-26 07:38:15.620816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.531 [2024-11-26 07:38:15.620822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.531 [2024-11-26 07:38:15.620837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.531 qpair failed and we were unable to recover it. 
00:28:47.792 [2024-11-26 07:38:15.630608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.630670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.630684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.630691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.630697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.630712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.640728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.640781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.640795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.640802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.640808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.640823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.650719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.650778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.650792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.650799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.650805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.650820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 
00:28:47.792 [2024-11-26 07:38:15.660741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.660799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.660813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.660820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.660826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.660841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.670780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.670838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.670856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.670863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.670869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.670884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.680849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.680902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.680917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.680924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.680930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.680944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 
00:28:47.792 [2024-11-26 07:38:15.690866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.690921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.690936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.690943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.690953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.690968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.700794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.700852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.700866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.700873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.700879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.700894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.710823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.710882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.710895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.710902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.710911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.710926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 
00:28:47.792 [2024-11-26 07:38:15.720939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.721006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.721020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.721027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.721033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.721048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.730962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.731015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.731030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.731037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.731043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.731058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.741002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.741057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.741072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.741079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.741085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.741100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 
00:28:47.792 [2024-11-26 07:38:15.751012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.751068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.751084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.751091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.751097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.751112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.761025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.761085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.761099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.761106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.761112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.761126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.792 [2024-11-26 07:38:15.771012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.771099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.771113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.771120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.771126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.771140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 
00:28:47.792 [2024-11-26 07:38:15.781041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.792 [2024-11-26 07:38:15.781098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.792 [2024-11-26 07:38:15.781113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.792 [2024-11-26 07:38:15.781119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.792 [2024-11-26 07:38:15.781125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.792 [2024-11-26 07:38:15.781140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.792 qpair failed and we were unable to recover it. 00:28:47.793 [2024-11-26 07:38:15.791162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.791220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.791235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.791241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.791247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.793 [2024-11-26 07:38:15.791262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.793 qpair failed and we were unable to recover it. 00:28:47.793 [2024-11-26 07:38:15.801117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.801176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.801195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.801202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.801208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.793 [2024-11-26 07:38:15.801223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.793 qpair failed and we were unable to recover it. 
00:28:47.793 [2024-11-26 07:38:15.811131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.811188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.811202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.811209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.811215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.793 [2024-11-26 07:38:15.811229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.793 qpair failed and we were unable to recover it. 00:28:47.793 [2024-11-26 07:38:15.821259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.821350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.821363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.821370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.821376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.793 [2024-11-26 07:38:15.821390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.793 qpair failed and we were unable to recover it. 00:28:47.793 [2024-11-26 07:38:15.831305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.831370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.831384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.831391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.831397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.793 [2024-11-26 07:38:15.831411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.793 qpair failed and we were unable to recover it. 
00:28:47.793 [2024-11-26 07:38:15.841223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.841319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.841333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.841340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.841350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.793 [2024-11-26 07:38:15.841365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.793 qpair failed and we were unable to recover it. 00:28:47.793 [2024-11-26 07:38:15.851332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.851384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.851398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.851404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.851410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.793 [2024-11-26 07:38:15.851425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.793 qpair failed and we were unable to recover it. 00:28:47.793 [2024-11-26 07:38:15.861329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.861386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.861400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.861408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.861414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76cc000b90 00:28:47.793 [2024-11-26 07:38:15.861427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.793 qpair failed and we were unable to recover it. 
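The same CONNECT failure repeats roughly every 10 ms against tqpair=0x7f76cc000b90 on qpair id 1 while the test keeps retrying. When triaging a run like this it is usually enough to summarize the console output rather than read every record. A small sketch (the log path is hypothetical and the grep patterns assume exactly the message layout shown above):

#!/usr/bin/env bash
# Minimal triage sketch for a console log like this one.
LOG=${1:-console.log}   # hypothetical path to a saved copy of this output

# How many qpairs failed without recovery?
grep -c 'qpair failed and we were unable to recover it' "$LOG"

# Collapse the repeated CQ transport errors into per-qpair-id buckets.
grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' "$LOG" \
    | sort | uniq -c | sort -rn

# Which tqpair pointers were involved (the log above shows several distinct ones)?
grep -o 'Failed to connect tqpair=0x[0-9a-f]*' "$LOG" | sort | uniq -c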
00:28:47.793 [2024-11-26 07:38:15.871350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.871422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.871449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.871461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.871471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c0000b90 00:28:47.793 [2024-11-26 07:38:15.871495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:47.793 qpair failed and we were unable to recover it. 00:28:47.793 [2024-11-26 07:38:15.881406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.793 [2024-11-26 07:38:15.881462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.793 [2024-11-26 07:38:15.881477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.793 [2024-11-26 07:38:15.881485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.793 [2024-11-26 07:38:15.881490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c0000b90 00:28:47.793 [2024-11-26 07:38:15.881506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:47.793 qpair failed and we were unable to recover it. 00:28:48.051 [2024-11-26 07:38:15.891475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.051 [2024-11-26 07:38:15.891531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.051 [2024-11-26 07:38:15.891546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.051 [2024-11-26 07:38:15.891553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.051 [2024-11-26 07:38:15.891559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c0000b90 00:28:48.051 [2024-11-26 07:38:15.891574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:48.051 qpair failed and we were unable to recover it. 
00:28:48.051 [2024-11-26 07:38:15.901410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.051 [2024-11-26 07:38:15.901463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.051 [2024-11-26 07:38:15.901485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.051 [2024-11-26 07:38:15.901493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.051 [2024-11-26 07:38:15.901499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:48.051 [2024-11-26 07:38:15.901518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.051 qpair failed and we were unable to recover it. 00:28:48.051 [2024-11-26 07:38:15.911508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.051 [2024-11-26 07:38:15.911565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.051 [2024-11-26 07:38:15.911580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.051 [2024-11-26 07:38:15.911587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.051 [2024-11-26 07:38:15.911593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f76c4000b90 00:28:48.051 [2024-11-26 07:38:15.911608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:48.051 qpair failed and we were unable to recover it. 00:28:48.051 [2024-11-26 07:38:15.911685] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:48.051 A controller has encountered a failure and is being reset. 00:28:48.051 [2024-11-26 07:38:15.921518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.051 [2024-11-26 07:38:15.921593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.051 [2024-11-26 07:38:15.921619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.051 [2024-11-26 07:38:15.921630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.051 [2024-11-26 07:38:15.921639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:48.051 [2024-11-26 07:38:15.921663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.051 qpair failed and we were unable to recover it. 
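At this point the host initiator can no longer even submit a Keep Alive, so it stops retrying individual qpairs and resets the whole controller; the following records show that reset completing and the workers reattaching. This test drives the userspace SPDK initiator, but a rough manual equivalent from a Linux host would use the kernel initiator via nvme-cli, as sketched below. The address, port and subsystem NQN are taken from the log above; everything else is illustrative, and flag spellings should be checked against your nvme-cli version.

#!/usr/bin/env bash
# Rough manual check of target reachability with the kernel initiator (illustrative only).
sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

# A target-side disconnect shows up as the controller cycling through
# resetting/connecting states, or disappearing from the subsystem list.
sudo nvme list-subsys
dmesg | tail -n 20

# Clean up.
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1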
00:28:48.051 [2024-11-26 07:38:15.931484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.051 [2024-11-26 07:38:15.931543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.051 [2024-11-26 07:38:15.931558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.051 [2024-11-26 07:38:15.931565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.051 [2024-11-26 07:38:15.931571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f24ba0 00:28:48.051 [2024-11-26 07:38:15.931586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:48.051 qpair failed and we were unable to recover it. 00:28:48.051 Controller properly reset. 00:28:48.051 Initializing NVMe Controllers 00:28:48.051 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:48.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:48.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:48.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:48.051 Initialization complete. Launching workers. 00:28:48.051 Starting thread on core 1 00:28:48.051 Starting thread on core 2 00:28:48.051 Starting thread on core 3 00:28:48.051 Starting thread on core 0 00:28:48.051 07:38:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:48.051 00:28:48.051 real 0m10.679s 00:28:48.051 user 0m19.331s 00:28:48.051 sys 0m4.615s 00:28:48.051 07:38:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.052 07:38:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.052 ************************************ 00:28:48.052 END TEST nvmf_target_disconnect_tc2 00:28:48.052 ************************************ 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:28:48.052 rmmod nvme_tcp 00:28:48.052 rmmod nvme_fabrics 00:28:48.052 rmmod nvme_keyring 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 896417 ']' 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 896417 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 896417 ']' 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 896417 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.052 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 896417 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 896417' 00:28:48.310 killing process with pid 896417 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 896417 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 896417 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.310 07:38:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.845 07:38:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.845 00:28:50.845 real 0m18.993s 00:28:50.845 user 0m46.603s 00:28:50.845 sys 0m9.141s 00:28:50.845 07:38:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.845 07:38:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:50.845 ************************************ 00:28:50.845 END TEST nvmf_target_disconnect 00:28:50.845 ************************************ 00:28:50.845 07:38:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:50.845 00:28:50.845 real 5m41.554s 00:28:50.845 user 10m24.814s 00:28:50.845 sys 1m51.671s 00:28:50.845 07:38:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.845 07:38:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.845 ************************************ 00:28:50.845 END TEST nvmf_host 00:28:50.845 ************************************ 00:28:50.845 07:38:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:50.845 07:38:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:50.845 07:38:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:50.845 07:38:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:50.845 07:38:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.845 07:38:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:50.845 ************************************ 00:28:50.845 START TEST nvmf_target_core_interrupt_mode 00:28:50.845 ************************************ 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:50.845 * Looking for test storage... 
00:28:50.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.845 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:50.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.846 --rc genhtml_branch_coverage=1 00:28:50.846 --rc genhtml_function_coverage=1 00:28:50.846 --rc genhtml_legend=1 00:28:50.846 --rc geninfo_all_blocks=1 00:28:50.846 --rc geninfo_unexecuted_blocks=1 00:28:50.846 00:28:50.846 ' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:50.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.846 --rc genhtml_branch_coverage=1 00:28:50.846 --rc genhtml_function_coverage=1 00:28:50.846 --rc genhtml_legend=1 00:28:50.846 --rc geninfo_all_blocks=1 00:28:50.846 --rc geninfo_unexecuted_blocks=1 00:28:50.846 00:28:50.846 ' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:50.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.846 --rc genhtml_branch_coverage=1 00:28:50.846 --rc genhtml_function_coverage=1 00:28:50.846 --rc genhtml_legend=1 00:28:50.846 --rc geninfo_all_blocks=1 00:28:50.846 --rc geninfo_unexecuted_blocks=1 00:28:50.846 00:28:50.846 ' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:50.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.846 --rc genhtml_branch_coverage=1 00:28:50.846 --rc genhtml_function_coverage=1 00:28:50.846 --rc genhtml_legend=1 00:28:50.846 --rc geninfo_all_blocks=1 00:28:50.846 --rc geninfo_unexecuted_blocks=1 00:28:50.846 00:28:50.846 ' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:50.846 ************************************ 00:28:50.846 START TEST nvmf_abort 00:28:50.846 ************************************ 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:50.846 * Looking for test storage... 00:28:50.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:50.846 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:50.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.847 --rc genhtml_branch_coverage=1 00:28:50.847 --rc genhtml_function_coverage=1 00:28:50.847 --rc genhtml_legend=1 00:28:50.847 --rc geninfo_all_blocks=1 00:28:50.847 --rc geninfo_unexecuted_blocks=1 00:28:50.847 00:28:50.847 ' 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:50.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.847 --rc genhtml_branch_coverage=1 00:28:50.847 --rc genhtml_function_coverage=1 00:28:50.847 --rc genhtml_legend=1 00:28:50.847 --rc geninfo_all_blocks=1 00:28:50.847 --rc geninfo_unexecuted_blocks=1 00:28:50.847 00:28:50.847 ' 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:50.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.847 --rc genhtml_branch_coverage=1 00:28:50.847 --rc genhtml_function_coverage=1 00:28:50.847 --rc genhtml_legend=1 00:28:50.847 --rc geninfo_all_blocks=1 00:28:50.847 --rc geninfo_unexecuted_blocks=1 00:28:50.847 00:28:50.847 ' 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:50.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:50.847 --rc genhtml_branch_coverage=1 00:28:50.847 --rc genhtml_function_coverage=1 00:28:50.847 --rc genhtml_legend=1 00:28:50.847 --rc geninfo_all_blocks=1 00:28:50.847 --rc geninfo_unexecuted_blocks=1 00:28:50.847 00:28:50.847 ' 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.847 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.107 07:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.107 07:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.383 07:38:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:56.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
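The gather_supported_nvmf_pci_devs trace above buckets NICs purely by PCI vendor:device ID and, because this job exercises the e810 family over tcp, keeps only the E810 bucket before resolving interface names from sysfs. A minimal, self-contained sketch of that bucketing, using only IDs and paths visible in the trace (pci_bus_cache is assumed to be an associative array of "vendor:device" -> PCI addresses, pre-seeded here with the two devices this run found; this is not the common.sh implementation itself):

    #!/usr/bin/env bash
    # Sketch of the NIC classification performed by nvmf/common.sh above.
    declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1" )   # what this run detected
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # Intel E810 family
    x722+=(${pci_bus_cache["$intel:0x37d2"]})                                    # Intel X722 family
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]} ${pci_bus_cache["$mellanox:0x1015"]})  # Mellanox (the trace checks several more IDs)
    pci_devs=("${e810[@]}")        # e810 is the family under test, so the other buckets are ignored
    for pci in "${pci_devs[@]}"; do
        # the real script reads the netdev name from /sys/bus/pci/devices/$pci/net/ (cvl_0_0, cvl_0_1 here)
        echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null || echo '<netdev>')"
    done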
00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:56.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:56.383 Found net devices under 0000:86:00.0: cvl_0_0 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.383 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:56.384 Found net devices under 0000:86:00.1: cvl_0_1 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:28:56.384 00:28:56.384 --- 10.0.0.2 ping statistics --- 00:28:56.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.384 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:28:56.384 00:28:56.384 --- 10.0.0.1 ping statistics --- 00:28:56.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.384 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:56.384 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=901117 
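The nvmf_tcp_init trace above carves the two detected E810 ports into a point-to-point test topology: cvl_0_0 is moved into a private network namespace and carries the target address, while cvl_0_1 stays in the root namespace as the initiator side. A condensed recap of the commands the trace runs, with the interface names and addresses from this run (a sketch of the sequence, not the common.sh code itself):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk                                         # target-side namespace used by the test
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                            # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface; the comment tags the rule for cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                         # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # target namespace -> initiator
    modprobe nvme-tcp                                          # kernel NVMe/TCP initiator support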
00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 901117 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 901117 ']' 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.644 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.644 [2024-11-26 07:38:24.560216] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:56.644 [2024-11-26 07:38:24.561180] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:28:56.644 [2024-11-26 07:38:24.561216] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.644 [2024-11-26 07:38:24.627778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:56.644 [2024-11-26 07:38:24.669836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.644 [2024-11-26 07:38:24.669875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.644 [2024-11-26 07:38:24.669882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.644 [2024-11-26 07:38:24.669888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.644 [2024-11-26 07:38:24.669893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.644 [2024-11-26 07:38:24.671209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.644 [2024-11-26 07:38:24.671296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.644 [2024-11-26 07:38:24.671298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.644 [2024-11-26 07:38:24.737663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:56.644 [2024-11-26 07:38:24.737688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:56.645 [2024-11-26 07:38:24.737971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:56.645 [2024-11-26 07:38:24.738018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.905 [2024-11-26 07:38:24.804047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.905 Malloc0 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.905 Delay0 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.905 [2024-11-26 07:38:24.875914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.905 07:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:57.165 [2024-11-26 07:38:25.037097] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:59.073 Initializing NVMe Controllers 00:28:59.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:59.073 controller IO queue size 128 less than required 00:28:59.073 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:59.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:59.073 Initialization complete. Launching workers. 
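Condensing the configuration trace above: the interrupt-mode target started earlier (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE) is given a TCP transport, a malloc bdev (size 64, block size 4096, per the MALLOC_* variables set at the top of abort.sh) wrapped in a delay bdev, and a subsystem listening on 10.0.0.2:4420; the abort example then drives it at queue depth 128 from one core, and the NS/CTRLR statistics just below report how many of those I/Os completed versus were aborted. Since rpc_cmd is the test framework's wrapper around scripts/rpc.py (the same script assigned to rpc_py later in this log), the setup corresponds roughly to the following sketch (default /var/tmp/spdk.sock RPC socket assumed):

    # Arguments copied verbatim from the rpc_cmd calls in the trace above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256            # "*** TCP Transport Init ***"
    $RPC bdev_malloc_create 64 4096 -b Malloc0                     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=4096
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000               # delay bdev keeps I/O outstanding so aborts have work to do
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator-side workload: one core (-c 0x1), ~1 s run (-t 1), queue depth 128 (-q 128)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128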
00:28:59.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37074 00:28:59.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37131, failed to submit 66 00:28:59.073 success 37074, unsuccessful 57, failed 0 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.073 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.073 rmmod nvme_tcp 00:28:59.073 rmmod nvme_fabrics 00:28:59.073 rmmod nvme_keyring 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 901117 ']' 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 901117 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 901117 ']' 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 901117 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901117 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.332 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901117' 00:28:59.332 killing process with pid 901117 00:28:59.333 
07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 901117 00:28:59.333 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 901117 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.591 07:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.496 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.496 00:29:01.496 real 0m10.745s 00:29:01.496 user 0m10.335s 00:29:01.496 sys 0m5.418s 00:29:01.496 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.496 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:01.496 ************************************ 00:29:01.496 END TEST nvmf_abort 00:29:01.496 ************************************ 00:29:01.496 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:01.496 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:01.496 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.496 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:01.496 ************************************ 00:29:01.496 START TEST nvmf_ns_hotplug_stress 00:29:01.496 ************************************ 00:29:01.496 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:01.756 * Looking for test storage... 
00:29:01.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:01.756 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:01.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.757 --rc genhtml_branch_coverage=1 00:29:01.757 --rc genhtml_function_coverage=1 00:29:01.757 --rc genhtml_legend=1 00:29:01.757 --rc geninfo_all_blocks=1 00:29:01.757 --rc geninfo_unexecuted_blocks=1 00:29:01.757 00:29:01.757 ' 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:01.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.757 --rc genhtml_branch_coverage=1 00:29:01.757 --rc genhtml_function_coverage=1 00:29:01.757 --rc genhtml_legend=1 00:29:01.757 --rc geninfo_all_blocks=1 00:29:01.757 --rc geninfo_unexecuted_blocks=1 00:29:01.757 00:29:01.757 ' 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:01.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.757 --rc genhtml_branch_coverage=1 00:29:01.757 --rc genhtml_function_coverage=1 00:29:01.757 --rc genhtml_legend=1 00:29:01.757 --rc geninfo_all_blocks=1 00:29:01.757 --rc geninfo_unexecuted_blocks=1 00:29:01.757 00:29:01.757 ' 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:01.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.757 --rc genhtml_branch_coverage=1 00:29:01.757 --rc genhtml_function_coverage=1 
00:29:01.757 --rc genhtml_legend=1 00:29:01.757 --rc geninfo_all_blocks=1 00:29:01.757 --rc geninfo_unexecuted_blocks=1 00:29:01.757 00:29:01.757 ' 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.757 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.758 07:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:07.032 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.033 07:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.033 07:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:07.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:07.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.033 
07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:07.033 Found net devices under 0000:86:00.0: cvl_0_0 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:07.033 Found net devices under 0000:86:00.1: cvl_0_1 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.033 07:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.033 07:38:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.033 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.293 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.293 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.293 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:29:07.294 00:29:07.294 --- 10.0.0.2 ping statistics --- 00:29:07.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.294 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:29:07.294 00:29:07.294 --- 10.0.0.1 ping statistics --- 00:29:07.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.294 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=904934 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 904934 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 904934 ']' 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
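By the end of the block above, nvmf_tcp_init has built the test topology: a network namespace cvl_0_0_ns_spdk holds the target-side port cvl_0_0 with 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened with a tagged iptables ACCEPT rule, and a ping in each direction confirms reachability before NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk". Condensed from the traced commands, with the same interface names and addresses as the log:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1  # start from clean addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # root netns reaches the target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target netns reaches the initiator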
00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.294 07:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:07.294 [2024-11-26 07:38:35.324479] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:07.294 [2024-11-26 07:38:35.325395] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:29:07.294 [2024-11-26 07:38:35.325427] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.553 [2024-11-26 07:38:35.399938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:07.553 [2024-11-26 07:38:35.451725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.553 [2024-11-26 07:38:35.451768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.553 [2024-11-26 07:38:35.451779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.553 [2024-11-26 07:38:35.451788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.553 [2024-11-26 07:38:35.451795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.553 [2024-11-26 07:38:35.453644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.553 [2024-11-26 07:38:35.453731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.553 [2024-11-26 07:38:35.453734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.553 [2024-11-26 07:38:35.531810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:07.553 [2024-11-26 07:38:35.531935] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:07.553 [2024-11-26 07:38:35.532054] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:07.553 [2024-11-26 07:38:35.532158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
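The target application is then started inside that namespace with interrupt mode enabled (nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE), its PID recorded as nvmfpid, and the script blocks in waitforlisten until the RPC socket answers; the NOTICE lines above confirm three reactors on cores 1-3 and the app/poll-group threads switching to interrupt mode. A rough stand-in for that start-and-wait step (the polling loop is only an illustration of what waitforlisten achieves, not its real implementation; paths are relative to the SPDK tree):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Poll the default RPC socket until the target is ready to serve requests.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening"; exit 1; }
      sleep 0.5
  done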
00:29:08.120 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.120 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:08.120 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.120 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.120 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:08.120 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.120 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:08.120 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:08.379 [2024-11-26 07:38:36.362525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.379 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:08.638 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.897 [2024-11-26 07:38:36.738978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.897 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:08.897 07:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:09.155 Malloc0 00:29:09.156 07:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:09.414 Delay0 00:29:09.414 07:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.673 07:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:09.673 NULL1 00:29:09.673 07:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
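The RPCs traced above provision the subsystem under test: a TCP transport with the options recorded in the log (-t tcp -o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 with allow-any-host (-a), serial SPDK00000000000001 and a ten-namespace cap (-m 10), data and discovery listeners on 10.0.0.2:4420, a 32 MB / 512-byte-block Malloc0 wrapped in a delay bdev Delay0 with the latency parameters shown, and a null bdev NULL1 (size 1000, 512-byte blocks); Delay0 and NULL1 are then attached as namespaces. The same sequence, with the rpc.py path shortened:

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1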
00:29:09.931 07:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=905423 00:29:09.931 07:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:09.931 07:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:09.931 07:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.190 07:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.449 07:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:10.449 07:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:10.449 true 00:29:10.449 07:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:10.449 07:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.707 07:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.965 07:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:10.965 07:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:11.223 true 00:29:11.223 07:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:11.223 07:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.482 Read completed with error (sct=0, sc=11) 00:29:11.482 07:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.482 Message suppressed 999 times: Read completed with error (sct=0, 
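With the namespaces in place, ns_hotplug_stress.sh launches spdk_nvme_perf against 10.0.0.2:4420 (30 seconds, queue depth 128, 512-byte random reads; the -Q 1000 flag appears to rate-limit error reporting, which is why the log keeps printing "Message suppressed 999 times: Read completed with error"). While that workload runs, the script repeatedly hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one unit per pass with bdev_null_resize, looping until kill -0 shows the perf process has exited. Reconstructed from the traced lines, with names and values as in the log:

  RPC=./scripts/rpc.py
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-unplug namespace 1
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-plug it back
      null_size=$((null_size + 1))
      $RPC bdev_null_resize NULL1 "$null_size"                      # resize NULL1 every pass
  done
  wait "$PERF_PID"                                                  # collect the perf exit status (the trace later shows 'wait 905423')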
sc=11) 00:29:11.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.740 07:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:11.740 07:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:11.740 true 00:29:11.740 07:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:11.740 07:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.676 07:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.934 07:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:12.934 07:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:12.934 true 00:29:12.934 07:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:12.934 07:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.193 07:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.451 07:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:13.451 07:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:13.710 true 00:29:13.710 07:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:13.710 07:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.645 07:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.904 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.904 07:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:14.904 07:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:15.162 true 00:29:15.162 07:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:15.162 07:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.124 07:38:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.124 07:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:16.124 07:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:16.383 true 00:29:16.383 07:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:16.383 07:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.641 07:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.899 07:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:16.899 07:38:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:16.899 true 00:29:17.157 07:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:17.157 07:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.092 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:18.092 07:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.351 07:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:29:18.351 07:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:18.642 true 00:29:18.642 07:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:18.642 07:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.642 07:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.934 07:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:18.934 07:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:19.217 true 00:29:19.217 07:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:19.217 07:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.157 07:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.416 07:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:20.416 07:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:20.675 true 00:29:20.675 07:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:20.675 07:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.614 07:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.614 07:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:21.614 
07:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:21.874 true 00:29:21.874 07:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:21.874 07:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.132 07:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:22.132 07:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:22.132 07:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:22.391 true 00:29:22.391 07:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:22.391 07:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.769 07:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.769 07:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:23.769 07:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:24.028 true 00:29:24.028 07:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:24.028 07:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:24.964 07:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.964 
07:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:24.964 07:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:25.223 true 00:29:25.223 07:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:25.223 07:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.482 07:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.741 07:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:25.741 07:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:25.741 true 00:29:25.741 07:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:25.741 07:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.118 07:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.118 07:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:27.118 07:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:27.377 true 00:29:27.377 07:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:27.377 07:38:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.312 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.312 07:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.312 07:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:28.312 07:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:28.571 true 00:29:28.571 07:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:28.571 07:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.830 07:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.089 07:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:29.089 07:38:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:29.089 true 00:29:29.089 07:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:29.089 07:38:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.466 07:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.466 07:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:30.466 07:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:30.726 true 00:29:30.726 07:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:30.726 07:38:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.662 07:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.662 07:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:31.662 07:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:31.920 true 00:29:31.920 07:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:31.920 07:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.179 07:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.438 07:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:32.438 07:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:32.438 true 00:29:32.697 07:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:32.697 07:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.633 07:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.633 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.892 07:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:33.892 07:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:34.150 true 00:29:34.150 07:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:34.150 07:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.086 07:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.086 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:35.086 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:35.345 true 00:29:35.345 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:35.345 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.603 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.603 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:35.603 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:35.862 true 00:29:35.862 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:35.862 07:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:36.798 07:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.057 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:37.057 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:37.315 true 00:29:37.315 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:37.315 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.573 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.573 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:37.574 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1027 00:29:37.832 true 00:29:37.832 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:37.832 07:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.209 07:39:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.209 07:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:39.209 07:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:39.468 true 00:29:39.468 07:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:39.468 07:39:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.404 07:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.404 Initializing NVMe Controllers 00:29:40.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.404 Controller IO queue size 128, less than required. 00:29:40.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.404 Controller IO queue size 128, less than required. 00:29:40.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:40.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:40.404 Initialization complete. Launching workers. 
00:29:40.404 ======================================================== 00:29:40.404 Latency(us) 00:29:40.404 Device Information : IOPS MiB/s Average min max 00:29:40.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2020.39 0.99 41778.33 1768.78 1014812.85 00:29:40.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16917.12 8.26 7546.69 1607.83 308587.71 00:29:40.404 ======================================================== 00:29:40.404 Total : 18937.52 9.25 11198.77 1607.83 1014812.85 00:29:40.404 00:29:40.404 07:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:40.404 07:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:40.663 true 00:29:40.663 07:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 905423 00:29:40.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (905423) - No such process 00:29:40.663 07:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 905423 00:29:40.663 07:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.922 07:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:41.181 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:41.181 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:41.181 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:41.181 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:41.181 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:41.181 null0 00:29:41.181 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:41.181 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:41.181 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:41.440 null1 00:29:41.440 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:41.440 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:41.440 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:41.698 null2 00:29:41.698 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:41.698 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:41.698 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:41.957 null3 00:29:41.957 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:41.957 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:41.957 07:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:41.957 null4 00:29:41.957 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:41.957 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:41.957 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:42.216 null5 00:29:42.216 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:42.216 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:42.216 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:42.475 null6 00:29:42.475 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:42.475 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:42.475 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:42.735 null7 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
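[Editor's note] Each worker launched at @63 runs add_remove with a namespace ID and a bdev name (add_remove 1 null0, add_remove 2 null1, and so on), and the @14-@18 trace shows it looping ten times, attaching the bdev as a namespace of nqn.2016-06.io.spdk:cnode1 and then detaching it. A sketch of that helper, reconstructed from the trace rather than copied from ns_hotplug_stress.sh, and reusing the rpc_py shorthand from the previous sketch:

    # Hot-plug loop run by each background worker: attach the given bdev
    # as namespace $nsid on cnode1, then remove it again, ten times over.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }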
00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 911279 911280 911282 911284 911286 911289 911290 911292 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:42.735 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:42.735 07:39:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:42.994 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:42.994 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:42.994 07:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.995 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:43.254 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:43.254 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:43.254 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:43.254 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:43.254 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.254 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:43.254 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:43.254 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
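[Editor's note] The @62-@66 lines above tie the pieces together: the eight add_remove workers are started in the background, their PIDs are collected into pids, and the parent then blocks on them (the "wait 911279 911280 ..." seen earlier). A sketch of that fan-out/fan-in pattern, again inferred from the trace under the same assumptions as the previous sketches:

    # Start one add_remove worker per null bdev, remember its PID,
    # then wait for all eight workers to finish their add/remove loops.
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"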
00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.513 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.773 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:44.032 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.032 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.032 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:44.032 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.032 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.032 07:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:44.032 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:44.032 07:39:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:44.032 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:44.032 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:44.032 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:44.032 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:44.032 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.032 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.292 07:39:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.292 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:44.553 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:44.553 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:44.553 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:44.553 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:44.553 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.553 
07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:44.553 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:44.553 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:44.812 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:45.071 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:45.071 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:45.071 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.071 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:45.071 07:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:45.071 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.071 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:45.071 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:45.071 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.071 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.071 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.072 
07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.072 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:45.331 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:45.331 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:45.331 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:45.331 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.331 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:45.331 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:45.331 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:45.331 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.590 07:39:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.590 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:45.849 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:46.109 07:39:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.109 07:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:46.109 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:46.109 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:46.109 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.109 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:46.109 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:46.109 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:46.109 
07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:46.109 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.368 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:46.628 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:46.628 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:46.628 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:46.628 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.628 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:46.628 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:46.628 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:46.628 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:46.887 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:46.888 rmmod nvme_tcp 00:29:46.888 rmmod nvme_fabrics 00:29:46.888 rmmod nvme_keyring 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 904934 ']' 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 904934 00:29:46.888 07:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 904934 ']' 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 904934 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 904934 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 904934' 00:29:46.888 killing process with pid 904934 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 904934 00:29:46.888 07:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 904934 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.147 07:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.683 00:29:49.683 real 0m47.598s 00:29:49.683 user 2m57.480s 00:29:49.683 sys 0m20.229s 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.683 07:39:17 
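The end of the run above is the shared teardown (nvmftestfini): the kernel NVMe/TCP initiator modules are unloaded, the target process is killed and waited on, and the SPDK-tagged iptables rules plus the test network namespace are removed. Reconstructed in isolation it looks roughly like the sketch below; the pid and interface names come from this log, while the namespace deletion and the retry count are assumptions about what the helpers do.

#!/usr/bin/env bash
# Rough sketch of the teardown sequence traced above (nvmftestfini and
# its helpers in test/nvmf/common.sh); values come from this run.
pid=904934               # nvmf_tgt pid reported in the log
ns=cvl_0_0_ns_spdk       # namespace created during setup

sync
# Unload the kernel initiator modules; retried because they can stay
# busy for a moment after the last disconnect.
for _ in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics

# Stop the target and wait until the process is really gone.
kill "$pid"
while kill -0 "$pid" 2>/dev/null; do sleep 1; done

# Drop the SPDK-tagged firewall rules, the namespace and its addresses.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete "$ns" 2>/dev/null || true
ip -4 addr flush cvl_0_1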
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:49.683 ************************************ 00:29:49.683 END TEST nvmf_ns_hotplug_stress 00:29:49.683 ************************************ 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:49.683 ************************************ 00:29:49.683 START TEST nvmf_delete_subsystem 00:29:49.683 ************************************ 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:49.683 * Looking for test storage... 00:29:49.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:49.683 07:39:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.683 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:49.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.683 --rc genhtml_branch_coverage=1 00:29:49.683 --rc genhtml_function_coverage=1 00:29:49.683 --rc genhtml_legend=1 00:29:49.683 --rc geninfo_all_blocks=1 00:29:49.683 --rc geninfo_unexecuted_blocks=1 00:29:49.683 00:29:49.684 ' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.684 --rc genhtml_branch_coverage=1 00:29:49.684 --rc genhtml_function_coverage=1 00:29:49.684 --rc genhtml_legend=1 00:29:49.684 --rc geninfo_all_blocks=1 00:29:49.684 --rc geninfo_unexecuted_blocks=1 00:29:49.684 00:29:49.684 ' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.684 --rc genhtml_branch_coverage=1 00:29:49.684 --rc genhtml_function_coverage=1 00:29:49.684 --rc genhtml_legend=1 00:29:49.684 --rc geninfo_all_blocks=1 00:29:49.684 --rc 
geninfo_unexecuted_blocks=1 00:29:49.684 00:29:49.684 ' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:49.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.684 --rc genhtml_branch_coverage=1 00:29:49.684 --rc genhtml_function_coverage=1 00:29:49.684 --rc genhtml_legend=1 00:29:49.684 --rc geninfo_all_blocks=1 00:29:49.684 --rc geninfo_unexecuted_blocks=1 00:29:49.684 00:29:49.684 ' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.684 07:39:17 
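The scripts/common.sh trace a little earlier is the harness picking lcov options by comparing the installed lcov version (1.15) against 2, one dot-separated field at a time. A simplified sketch of that comparison, assuming purely numeric versions and only the '<' operator:

#!/usr/bin/env bash
# Simplified field-by-field version comparison, in the spirit of the
# cmp_versions helper traced above (numeric, dot-separated versions only).
ver_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < len; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}

# Mirrors the decision in the log: lcov 1.15 sorts before 2, so the
# pre-2.0 --rc option set is selected.
if ver_lt 1.15 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi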
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.684 07:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.952 07:39:22 
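At the top of the block above, build_nvmf_app_args assembles the target's command line: the shared-memory id and tracepoint mask are always appended, and because this suite runs with --interrupt-mode that flag is added as well. A condensed sketch of that assembly (variable names follow the trace; the NO_HUGE and log-flag branches visible in the trace are omitted here):

#!/usr/bin/env bash
# Condensed sketch of the argument assembly traced above.
NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
interrupt_mode=1           # this suite passes --interrupt-mode

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + tracepoint mask
if [[ $interrupt_mode -eq 1 ]]; then
    NVMF_APP+=(--interrupt-mode)              # reactors sleep instead of polling
fi

echo "target command line: ${NVMF_APP[*]} -m 0x3"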
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.952 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.952 07:39:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:54.953 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:54.953 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.953 07:39:22 
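The loop being traced here walks the candidate PCI functions (the two e810 ports at 0000:86:00.0/1) and resolves each to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names in the following entries come from. A reduced sketch of that lookup, with an operstate test standing in for the harness's up-check:

#!/usr/bin/env bash
# Reduced sketch of the NIC discovery traced here: map PCI functions to
# the net devices bound under them via sysfs. PCI addresses are the ones
# this log reports; they will differ on other machines.
pci_devs=(0000:86:00.0 0000:86:00.1)
net_devs=()

for pci in "${pci_devs[@]}"; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $path ]] || continue
        dev=${path##*/}
        # Keep only interfaces whose link is up (approximation of the
        # harness's check).
        if [[ $(cat "/sys/class/net/$dev/operstate" 2>/dev/null) == up ]]; then
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        fi
    done
done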
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:54.953 Found net devices under 0000:86:00.0: cvl_0_0 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:54.953 Found net devices under 0000:86:00.1: cvl_0_1 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:29:54.953 00:29:54.953 --- 10.0.0.2 ping statistics --- 00:29:54.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.953 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:54.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:29:54.953 00:29:54.953 --- 10.0.0.1 ping statistics --- 00:29:54.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.953 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=915535 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 915535 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 915535 ']' 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:54.953 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.954 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.954 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
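The nvmf_tcp_init steps traced just before this point build the test topology: one port of the NIC is moved into a private network namespace and addressed as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), a firewall exception is added for port 4420, and both directions are ping-checked. Collected into one place, and using the interface names and addresses from this log, the sequence is roughly:

#!/usr/bin/env bash
# Network layout built by nvmf_tcp_init as traced above; interface names
# and addresses are taken from this run.
set -e
tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$tgt_if"
ip -4 addr flush "$ini_if"
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"                          # target side lives in the namespace

ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator address
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target address
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up

# Allow NVMe/TCP traffic on the initiator-side interface (tagged so the
# teardown can find and remove the rule later).
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'

# Sanity pings in both directions, matching the two pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1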
00:29:54.954 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.954 07:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:54.954 [2024-11-26 07:39:22.943810] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:54.954 [2024-11-26 07:39:22.944745] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:29:54.954 [2024-11-26 07:39:22.944780] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.954 [2024-11-26 07:39:23.011137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:55.213 [2024-11-26 07:39:23.054031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:55.213 [2024-11-26 07:39:23.054067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:55.213 [2024-11-26 07:39:23.054074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:55.213 [2024-11-26 07:39:23.054082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:55.213 [2024-11-26 07:39:23.054088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:55.213 [2024-11-26 07:39:23.055307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.213 [2024-11-26 07:39:23.055311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.213 [2024-11-26 07:39:23.123761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:55.213 [2024-11-26 07:39:23.123884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:55.213 [2024-11-26 07:39:23.123989] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
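With the namespace in place, nvmfappstart launches the target inside it with two cores (-m 0x3), interrupt mode, and the full tracepoint mask, then polls the RPC socket until the application answers; the DPDK and reactor notices above are that start-up. A simplified stand-in for the launch-and-wait step (the real waitforlisten helper is more careful than this):

#!/usr/bin/env bash
# Simplified launch-and-wait for nvmf_tgt in interrupt mode, mirroring
# the command line visible in the trace above.
ns=cvl_0_0_ns_spdk
sock=/var/tmp/spdk.sock

ip netns exec "$ns" ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Poll the RPC socket until the server responds; spdk_get_version is a
# cheap probe for readiness.
for _ in {1..100}; do
    ./scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null && break
    sleep 0.1
done
echo "nvmf_tgt is up, pid=$nvmfpid"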
00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.213 [2024-11-26 07:39:23.184051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.213 [2024-11-26 07:39:23.204194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.213 NULL1 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.213 07:39:23 
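The rpc_cmd calls above provision everything the delete test needs: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces with any host allowed, a listener on 10.0.0.2:4420, and a 1000 MB null bdev with 512-byte blocks. The same sequence as direct rpc.py invocations (transport flags copied from the trace as-is):

#!/usr/bin/env bash
# The provisioning sequence from the trace above, as direct rpc.py calls.
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192                       # options as in the trace
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # -a: allow any host, -m: max namespaces
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                               # name, size in MB, block size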
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.213 Delay0 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=915666 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:55.213 07:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:55.213 [2024-11-26 07:39:23.286671] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
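Stripped of the xtrace prefixes, the setup traced above (delete_subsystem.sh lines 15-30) reduces to the short rpc_cmd sequence below; only the arguments are taken verbatim from the trace, and treating rpc_cmd as a plain RPC client wrapper is an assumption of this sketch.

    # Build the TCP target: transport, subsystem, listener, and a delay bdev as the namespace.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Drive queue-depth-128 random I/O at the listener from cores 2-3, then delete
    # the subsystem out from under it (delete_subsystem.sh lines 26-32).
    build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

Because Delay0 layers large artificial latencies (the 1000000 values above) on top of NULL1, plenty of I/O is still outstanding when the delete lands. The dump that follows is therefore the expected storm of Read/Write completions failing with (sct=0, sc=8) and "starting I/O failed: -6", ending in a per-core latency summary and an "errors occurred" report from spdk_nvme_perf; the harness then polls kill -0 on the perf pid in 0.5 s steps until the process is gone, re-creates the subsystem, and repeats the run with -t 3.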
00:29:57.745 07:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:57.745 07:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.745 07:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:57.745 Write completed with error (sct=0, sc=8) 00:29:57.745 Write completed with error (sct=0, sc=8) 00:29:57.745 starting I/O failed: -6 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.745 starting I/O failed: -6 00:29:57.745 Write completed with error (sct=0, sc=8) 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.745 Write completed with error (sct=0, sc=8) 00:29:57.745 starting I/O failed: -6 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.745 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, 
sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read 
completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 starting I/O failed: -6 00:29:57.746 [2024-11-26 07:39:25.448575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f822400d020 is same with the state(6) to be set 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed 
with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Write completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.746 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Read completed with error (sct=0, sc=8) 00:29:57.747 Write completed with error (sct=0, sc=8) 00:29:58.683 [2024-11-26 07:39:26.423435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a09a0 is same with the state(6) to be set 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 
00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 [2024-11-26 07:39:26.450104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f822400d680 is same with the state(6) to be set 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 [2024-11-26 07:39:26.450411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f4a0 is same with the state(6) to be set 00:29:58.683 Write completed with error (sct=0, sc=8) 
00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 [2024-11-26 07:39:26.450579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f2c0 is same with the state(6) to be set 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed 
with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Write completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.683 Read completed with error (sct=0, sc=8) 00:29:58.684 Read completed with error (sct=0, sc=8) 00:29:58.684 Read completed with error (sct=0, sc=8) 00:29:58.684 Read completed with error (sct=0, sc=8) 00:29:58.684 Write completed with error (sct=0, sc=8) 00:29:58.684 Read completed with error (sct=0, sc=8) 00:29:58.684 Read completed with error (sct=0, sc=8) 00:29:58.684 [2024-11-26 07:39:26.451145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f860 is same with the state(6) to be set 00:29:58.684 Initializing NVMe Controllers 00:29:58.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.684 Controller IO queue size 128, less than required. 00:29:58.684 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:58.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:58.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:58.684 Initialization complete. Launching workers. 00:29:58.684 ======================================================== 00:29:58.684 Latency(us) 00:29:58.684 Device Information : IOPS MiB/s Average min max 00:29:58.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.44 0.10 943869.56 656.47 1012466.51 00:29:58.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.64 0.08 841810.00 300.88 1011367.34 00:29:58.684 ======================================================== 00:29:58.684 Total : 365.08 0.18 896445.15 300.88 1012466.51 00:29:58.684 00:29:58.684 [2024-11-26 07:39:26.451779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a09a0 (9): Bad file descriptor 00:29:58.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:58.684 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.684 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:58.684 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 915666 00:29:58.684 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 915666 00:29:58.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (915666) - No such process 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 915666 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg 
wait 915666 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 915666 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:58.943 [2024-11-26 07:39:26.984015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=916139 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:58.943 07:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916139 00:29:58.943 07:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:59.201 [2024-11-26 07:39:27.053270] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:59.460 07:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:59.460 07:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916139 00:29:59.460 07:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:00.027 07:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:00.027 07:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916139 00:30:00.027 07:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:00.595 07:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:00.595 07:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916139 00:30:00.595 07:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:01.160 07:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:01.160 07:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916139 00:30:01.160 07:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:01.726 07:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:01.726 07:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916139 00:30:01.726 07:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:01.986 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:01.986 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916139 00:30:01.986 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:02.245 Initializing NVMe Controllers 00:30:02.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.245 Controller IO queue size 128, less than required. 
00:30:02.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:02.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:02.245 Initialization complete. Launching workers. 00:30:02.245 ======================================================== 00:30:02.245 Latency(us) 00:30:02.245 Device Information : IOPS MiB/s Average min max 00:30:02.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004115.05 1000183.42 1043183.82 00:30:02.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006080.20 1000337.94 1041918.67 00:30:02.245 ======================================================== 00:30:02.245 Total : 256.00 0.12 1005097.62 1000183.42 1043183.82 00:30:02.245 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916139 00:30:02.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (916139) - No such process 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 916139 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:02.504 rmmod nvme_tcp 00:30:02.504 rmmod nvme_fabrics 00:30:02.504 rmmod nvme_keyring 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 915535 ']' 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 915535 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 915535 ']' 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 
915535 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.504 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 915535 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 915535' 00:30:02.763 killing process with pid 915535 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 915535 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 915535 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.763 07:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.299 07:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.299 00:30:05.299 real 0m15.624s 00:30:05.299 user 0m25.930s 00:30:05.299 sys 0m5.779s 00:30:05.299 07:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.299 07:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:05.299 ************************************ 00:30:05.299 END TEST nvmf_delete_subsystem 00:30:05.299 ************************************ 00:30:05.299 07:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:05.299 07:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:05.299 07:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:05.299 07:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:05.299 ************************************ 00:30:05.299 START TEST nvmf_host_management 00:30:05.299 ************************************ 00:30:05.299 07:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:05.299 * Looking for test storage... 00:30:05.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.299 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:05.299 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:05.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.300 --rc genhtml_branch_coverage=1 00:30:05.300 --rc genhtml_function_coverage=1 00:30:05.300 --rc genhtml_legend=1 00:30:05.300 --rc geninfo_all_blocks=1 00:30:05.300 --rc geninfo_unexecuted_blocks=1 00:30:05.300 00:30:05.300 ' 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:05.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.300 --rc genhtml_branch_coverage=1 00:30:05.300 --rc genhtml_function_coverage=1 00:30:05.300 --rc genhtml_legend=1 00:30:05.300 --rc geninfo_all_blocks=1 00:30:05.300 --rc geninfo_unexecuted_blocks=1 00:30:05.300 00:30:05.300 ' 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:05.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.300 --rc genhtml_branch_coverage=1 00:30:05.300 --rc genhtml_function_coverage=1 00:30:05.300 --rc genhtml_legend=1 00:30:05.300 --rc geninfo_all_blocks=1 00:30:05.300 --rc geninfo_unexecuted_blocks=1 00:30:05.300 00:30:05.300 ' 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:05.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.300 --rc genhtml_branch_coverage=1 00:30:05.300 --rc genhtml_function_coverage=1 00:30:05.300 --rc genhtml_legend=1 
00:30:05.300 --rc geninfo_all_blocks=1 00:30:05.300 --rc geninfo_unexecuted_blocks=1 00:30:05.300 00:30:05.300 ' 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.300 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.301 07:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:05.301 07:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:10.572 07:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:10.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:10.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:10.572 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
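
The scan above has just matched both ports of the E810 NIC (vendor 0x8086, device 0x159b, bound to the ice driver) and is resolving each PCI function to its kernel net device through /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of that lookup, assuming only lspci and sysfs (the helper functions and array names from nvmf/common.sh are not reproduced here):

# Hedged sketch: list the E810 functions the trace reports and print the net
# device behind each PCI address, the same mapping the "Found net devices
# under ..." messages below come from.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] && echo "Found net device under $pci: ${netdir##*/}"
  done
done
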
00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:10.573 Found net devices under 0000:86:00.0: cvl_0_0 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:10.573 Found net devices under 0000:86:00.1: cvl_0_1 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.573 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:10.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:30:10.831 00:30:10.831 --- 10.0.0.2 ping statistics --- 00:30:10.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.831 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:10.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:30:10.831 00:30:10.831 --- 10.0.0.1 ping statistics --- 00:30:10.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.831 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=920339 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 920339 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 920339 ']' 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.831 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:10.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.832 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.832 07:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.090 [2024-11-26 07:39:38.933336] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:11.091 [2024-11-26 07:39:38.934271] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:30:11.091 [2024-11-26 07:39:38.934305] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.091 [2024-11-26 07:39:39.000728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.091 [2024-11-26 07:39:39.044058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.091 [2024-11-26 07:39:39.044095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.091 [2024-11-26 07:39:39.044103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.091 [2024-11-26 07:39:39.044109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.091 [2024-11-26 07:39:39.044113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.091 [2024-11-26 07:39:39.045606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.091 [2024-11-26 07:39:39.045690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.091 [2024-11-26 07:39:39.045819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.091 [2024-11-26 07:39:39.045820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.091 [2024-11-26 07:39:39.111794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:11.091 [2024-11-26 07:39:39.111989] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:11.091 [2024-11-26 07:39:39.112411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:11.091 [2024-11-26 07:39:39.112449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:11.091 [2024-11-26 07:39:39.112594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
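
By this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace with the flags traced above (-i 0 -e 0xFFFF --interrupt-mode -m 0x1E) and the reactors and spdk_threads have switched to interrupt mode. A hedged sketch of the equivalent manual launch follows; the rpc.py readiness call stands in for the harness's own waitforlisten helper, and paths are assumed relative to an SPDK checkout:

# Hedged sketch, not the autotest harness itself: start nvmf_tgt in the test
# namespace with the flags seen in the trace, then block until framework init
# completes and /var/tmp/spdk.sock is serving RPCs.
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
echo "nvmf_tgt (pid $nvmfpid) is up in interrupt mode"
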
00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.091 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.091 [2024-11-26 07:39:39.174485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.349 Malloc0 00:30:11.349 [2024-11-26 07:39:39.250457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=920380 00:30:11.349 07:39:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 920380 /var/tmp/bdevperf.sock 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 920380 ']' 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:11.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.349 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.350 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.350 { 00:30:11.350 "params": { 00:30:11.350 "name": "Nvme$subsystem", 00:30:11.350 "trtype": "$TEST_TRANSPORT", 00:30:11.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.350 "adrfam": "ipv4", 00:30:11.350 "trsvcid": "$NVMF_PORT", 00:30:11.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.350 "hdgst": ${hdgst:-false}, 00:30:11.350 "ddgst": ${ddgst:-false} 00:30:11.350 }, 00:30:11.350 "method": "bdev_nvme_attach_controller" 00:30:11.350 } 00:30:11.350 EOF 00:30:11.350 )") 00:30:11.350 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:11.350 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
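
The heredoc traced above is gen_nvmf_target_json assembling the single bdev_nvme_attach_controller entry that bdevperf reads through /dev/fd/63; its expanded form is printed immediately after this sketch. A hedged, standalone equivalent is shown below, with the outer "subsystems"/"bdev" wrapper assumed from the usual bdevperf --json layout (only the inner params block appears verbatim in the trace):

# Hedged sketch of an equivalent hand-written bdevperf run against the target
# created above, using the same queue depth, I/O size, workload and runtime.
cat > /tmp/nvme0_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_attach.json \
  -q 64 -o 65536 -w verify -t 10
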
00:30:11.350 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:11.350 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:11.350 "params": { 00:30:11.350 "name": "Nvme0", 00:30:11.350 "trtype": "tcp", 00:30:11.350 "traddr": "10.0.0.2", 00:30:11.350 "adrfam": "ipv4", 00:30:11.350 "trsvcid": "4420", 00:30:11.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.350 "hdgst": false, 00:30:11.350 "ddgst": false 00:30:11.350 }, 00:30:11.350 "method": "bdev_nvme_attach_controller" 00:30:11.350 }' 00:30:11.350 [2024-11-26 07:39:39.346686] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:30:11.350 [2024-11-26 07:39:39.346737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920380 ] 00:30:11.350 [2024-11-26 07:39:39.409960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.609 [2024-11-26 07:39:39.451992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.609 Running I/O for 10 seconds... 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:11.609 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:11.868 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:11.868 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:11.868 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.868 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:11.868 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.868 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:30:11.868 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:30:11.868 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:12.129 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:12.129 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:12.129 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:12.129 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:12.129 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.129 07:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=665 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 665 -ge 100 ']' 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.129 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:12.129 [2024-11-26 07:39:40.046279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046333] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set
00:30:12.129 [2024-11-26 07:39:40.046470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.129 [2024-11-26 07:39:40.046531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9aec0 is same with the state(6) to be set 00:30:12.130 [2024-11-26 07:39:40.046720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.046985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.046993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130 [2024-11-26 07:39:40.047241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.130 [2024-11-26 07:39:40.047249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.130-00:30:12.131 [2024-11-26 07:39:40.047256 - 07:39:40.047705] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:32-61 nsid:1 lba:102400-106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each subsequently completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (30 identically aborted WRITE command/completion pairs condensed) 00:30:12.131 [2024-11-26 07:39:40.047712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.131 [2024-11-26 07:39:40.047720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.131 [2024-11-26 07:39:40.047727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.131 [2024-11-26 07:39:40.047735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.131 [2024-11-26 07:39:40.047741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.131 [2024-11-26 07:39:40.047767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:12.131 [2024-11-26 07:39:40.048710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:12.131 task offset: 98304 on job bdev=Nvme0n1 fails 00:30:12.131 00:30:12.131 Latency(us) 00:30:12.131 [2024-11-26T06:39:40.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.131 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.131 Job: Nvme0n1 ended in about 0.40 seconds with error 00:30:12.131 Verification LBA range: start 0x0 length 0x400 00:30:12.131 Nvme0n1 : 0.40 1918.36 119.90 159.86 0.00 29949.66 2920.63 27354.16 00:30:12.131 [2024-11-26T06:39:40.231Z] =================================================================================================================== 00:30:12.131 [2024-11-26T06:39:40.231Z] Total : 1918.36 119.90 159.86 0.00 29949.66 2920.63 27354.16 00:30:12.131 [2024-11-26 07:39:40.051150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:12.131 [2024-11-26 07:39:40.051172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67c500 (9): Bad file descriptor 00:30:12.131 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.131 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:12.131 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.131 [2024-11-26 07:39:40.052149] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:12.131 [2024-11-26 07:39:40.052235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLO 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:12.131 CK OFFSET 0x0 len:0x400 00:30:12.132 [2024-11-26 07:39:40.052260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:12.132 [2024-11-26 07:39:40.052273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:12.132 
[2024-11-26 07:39:40.052281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:12.132 [2024-11-26 07:39:40.052288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.132 [2024-11-26 07:39:40.052294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x67c500 00:30:12.132 [2024-11-26 07:39:40.052314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67c500 (9): Bad file descriptor 00:30:12.132 [2024-11-26 07:39:40.052326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:12.132 [2024-11-26 07:39:40.052332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:12.132 [2024-11-26 07:39:40.052340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:12.132 [2024-11-26 07:39:40.052348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:12.132 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.132 07:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 920380 00:30:13.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (920380) - No such process 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:13.068 { 00:30:13.068 "params": { 00:30:13.068 "name": "Nvme$subsystem", 00:30:13.068 "trtype": "$TEST_TRANSPORT", 00:30:13.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:13.068 "adrfam": "ipv4", 00:30:13.068 "trsvcid": "$NVMF_PORT", 00:30:13.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:13.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:13.068 "hdgst": ${hdgst:-false}, 00:30:13.068 "ddgst": ${ddgst:-false} 00:30:13.068 }, 00:30:13.068 "method": "bdev_nvme_attach_controller" 00:30:13.068 } 
00:30:13.068 EOF 00:30:13.068 )") 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:13.068 07:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:13.068 "params": { 00:30:13.068 "name": "Nvme0", 00:30:13.068 "trtype": "tcp", 00:30:13.068 "traddr": "10.0.0.2", 00:30:13.068 "adrfam": "ipv4", 00:30:13.068 "trsvcid": "4420", 00:30:13.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.068 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:13.068 "hdgst": false, 00:30:13.068 "ddgst": false 00:30:13.068 }, 00:30:13.068 "method": "bdev_nvme_attach_controller" 00:30:13.068 }' 00:30:13.068 [2024-11-26 07:39:41.117044] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:30:13.068 [2024-11-26 07:39:41.117093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920636 ] 00:30:13.326 [2024-11-26 07:39:41.179665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.326 [2024-11-26 07:39:41.219229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.585 Running I/O for 1 seconds... 00:30:14.521 1984.00 IOPS, 124.00 MiB/s 00:30:14.521 Latency(us) 00:30:14.521 [2024-11-26T06:39:42.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.521 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:14.521 Verification LBA range: start 0x0 length 0x400 00:30:14.521 Nvme0n1 : 1.02 2009.28 125.58 0.00 0.00 31254.78 7978.30 27468.13 00:30:14.521 [2024-11-26T06:39:42.621Z] =================================================================================================================== 00:30:14.521 [2024-11-26T06:39:42.621Z] Total : 2009.28 125.58 0.00 0.00 31254.78 7978.30 27468.13 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@124 -- # set +e 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.780 rmmod nvme_tcp 00:30:14.780 rmmod nvme_fabrics 00:30:14.780 rmmod nvme_keyring 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 920339 ']' 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 920339 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 920339 ']' 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 920339 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 920339 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 920339' 00:30:14.780 killing process with pid 920339 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 920339 00:30:14.780 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 920339 00:30:15.038 [2024-11-26 07:39:42.921585] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:15.038 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:15.038 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:15.038 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:15.039 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:15.039 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:15.039 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:15.039 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:15.039 07:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:15.039 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:15.039 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.039 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.039 07:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.940 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.940 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:16.940 00:30:16.940 real 0m12.068s 00:30:16.940 user 0m17.572s 00:30:16.940 sys 0m6.161s 00:30:16.940 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.940 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:16.940 ************************************ 00:30:16.940 END TEST nvmf_host_management 00:30:16.940 ************************************ 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:17.199 ************************************ 00:30:17.199 START TEST nvmf_lvol 00:30:17.199 ************************************ 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:17.199 * Looking for test storage... 
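The nvmftestfini teardown traced just above reduces to a handful of host commands. The sketch below is a condensed, standalone approximation of those steps before the lvol test output continues; the bodies of the killprocess and remove_spdk_ns helpers are not visible in this trace, so the process kill and the cvl_0_0_ns_spdk namespace deletion shown here are assumptions rather than the helpers' exact code.

#!/usr/bin/env bash
# Hedged sketch of the nvmftestfini cleanup seen in the trace above.
sync
# modprobe -r also drops dependent modules; the rmmod output above shows
# nvme_tcp, nvme_fabrics and nvme_keyring being removed.
modprobe -v -r nvme-tcp || true
modprobe -v -r nvme-fabrics || true
# Stop the nvmf target started for the previous test case
# (assumption: killprocess signals the pid stored in $nvmfpid).
[[ -n "${nvmfpid:-}" ]] && kill "$nvmfpid" 2>/dev/null || true
# Keep every iptables rule except the ones tagged SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Assumption: remove_spdk_ns deletes the target namespace created earlier.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1 || true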
00:30:17.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:17.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.199 --rc genhtml_branch_coverage=1 00:30:17.199 --rc genhtml_function_coverage=1 00:30:17.199 --rc genhtml_legend=1 00:30:17.199 --rc geninfo_all_blocks=1 00:30:17.199 --rc geninfo_unexecuted_blocks=1 00:30:17.199 00:30:17.199 ' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:17.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.199 --rc genhtml_branch_coverage=1 00:30:17.199 --rc genhtml_function_coverage=1 00:30:17.199 --rc genhtml_legend=1 00:30:17.199 --rc geninfo_all_blocks=1 00:30:17.199 --rc geninfo_unexecuted_blocks=1 00:30:17.199 00:30:17.199 ' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:17.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.199 --rc genhtml_branch_coverage=1 00:30:17.199 --rc genhtml_function_coverage=1 00:30:17.199 --rc genhtml_legend=1 00:30:17.199 --rc geninfo_all_blocks=1 00:30:17.199 --rc geninfo_unexecuted_blocks=1 00:30:17.199 00:30:17.199 ' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:17.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.199 --rc genhtml_branch_coverage=1 00:30:17.199 --rc genhtml_function_coverage=1 00:30:17.199 --rc genhtml_legend=1 00:30:17.199 --rc geninfo_all_blocks=1 00:30:17.199 --rc geninfo_unexecuted_blocks=1 00:30:17.199 00:30:17.199 ' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.199 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.200 07:39:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.200 07:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.467 07:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:22.467 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:22.467 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:22.467 Found net devices under 0000:86:00.0: cvl_0_0 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:22.467 Found net devices under 0000:86:00.1: cvl_0_1 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.467 
07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:22.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:30:22.467 00:30:22.467 --- 10.0.0.2 ping statistics --- 00:30:22.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.467 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:22.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:30:22.467 00:30:22.467 --- 10.0.0.1 ping statistics --- 00:30:22.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.467 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:22.467 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=924385 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 924385 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 924385 ']' 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.468 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:22.468 [2024-11-26 07:39:50.360608] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:22.468 [2024-11-26 07:39:50.361531] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:30:22.468 [2024-11-26 07:39:50.361564] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.468 [2024-11-26 07:39:50.428691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:22.468 [2024-11-26 07:39:50.470759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.468 [2024-11-26 07:39:50.470798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.468 [2024-11-26 07:39:50.470804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.468 [2024-11-26 07:39:50.470810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.468 [2024-11-26 07:39:50.470816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.468 [2024-11-26 07:39:50.472224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.468 [2024-11-26 07:39:50.472318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.468 [2024-11-26 07:39:50.472320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.468 [2024-11-26 07:39:50.539037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.468 [2024-11-26 07:39:50.539132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:22.468 [2024-11-26 07:39:50.539282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:22.468 [2024-11-26 07:39:50.539336] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
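With the interrupt-mode target now running inside the cvl_0_0_ns_spdk namespace, the rpc.py calls that follow in the trace assemble the lvol stack it exports. The standalone sketch below mirrors those calls; the SPDK paths and rpc arguments come straight from the trace, while the readiness poll on rpc_get_methods stands in for the waitforlisten helper (whose body is not shown here) and is therefore an assumption, as is the MiB unit noted for the lvol size.

#!/usr/bin/env bash
# Hedged sketch: interrupt-mode target bring-up plus the lvol stack
# configured by the rpc.py calls that follow in this trace.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_ROOT/scripts/rpc.py"

# Launch nvmf_tgt inside the target namespace, cores 0-2 (-m 0x7),
# with interrupt mode enabled, matching the command seen above.
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!

# Assumption: poll the RPC socket instead of using the waitforlisten helper.
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512                              # Malloc0
"$rpc" bdev_malloc_create 64 512                              # Malloc1
"$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$("$rpc" bdev_lvol_create_lvstore raid0 lvs)              # prints lvstore UUID
lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)             # size 20 (MiB assumed)
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420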
00:30:22.726 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.726 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:22.726 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.726 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.726 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:22.726 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.726 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:22.726 [2024-11-26 07:39:50.772923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.726 07:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:22.984 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:22.984 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:23.242 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:23.242 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:23.501 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:23.760 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8e08fd47-46f0-4181-a361-1048b01f9df7 00:30:23.760 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e08fd47-46f0-4181-a361-1048b01f9df7 lvol 20 00:30:23.760 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=225b6a57-7573-424b-8f39-534539ab8daf 00:30:23.760 07:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:24.018 07:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 225b6a57-7573-424b-8f39-534539ab8daf 00:30:24.276 07:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:24.535 [2024-11-26 07:39:52.400898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:24.535 07:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:24.794 07:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=924657 00:30:24.794 07:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:24.794 07:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:25.731 07:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 225b6a57-7573-424b-8f39-534539ab8daf MY_SNAPSHOT 00:30:25.990 07:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=16029d73-349a-4cd3-8807-2231a2d7f946 00:30:25.990 07:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 225b6a57-7573-424b-8f39-534539ab8daf 30 00:30:26.248 07:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 16029d73-349a-4cd3-8807-2231a2d7f946 MY_CLONE 00:30:26.508 07:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6f61bc08-6a4a-45cc-b234-87902a71efce 00:30:26.508 07:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6f61bc08-6a4a-45cc-b234-87902a71efce 00:30:26.767 07:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 924657 00:30:36.746 Initializing NVMe Controllers 00:30:36.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:36.746 Controller IO queue size 128, less than required. 00:30:36.746 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:36.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:36.746 Initialization complete. Launching workers. 
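The trace above (nvmf_lvol.sh steps 47-50) takes a snapshot of the exported lvol, grows the origin, clones the snapshot, and inflates the clone, all while spdk_nvme_perf keeps I/O running against nqn.2016-06.io.spdk:cnode0. A condensed standalone sketch of that RPC sequence follows; $rpc and $lvol are placeholders for the rpc.py path and the lvol UUID captured earlier in this run, and the sketch is an annotation, not part of the captured output.

  # minimal sketch, assuming a running SPDK target and the lvol UUID in $lvol
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot; prints the snapshot UUID
  $rpc bdev_lvol_resize "$lvol" 30                          # grow the origin lvol (argument 30 as in the trace above)
  clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)        # thin clone backed by the snapshot
  $rpc bdev_lvol_inflate "$clone"                           # allocate all clusters so the clone no longer depends on the snapshot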
00:30:36.746 ======================================================== 00:30:36.746 Latency(us) 00:30:36.746 Device Information : IOPS MiB/s Average min max 00:30:36.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12322.90 48.14 10392.14 1526.08 49702.95 00:30:36.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12468.50 48.71 10269.65 3500.69 54816.44 00:30:36.746 ======================================================== 00:30:36.746 Total : 24791.40 96.84 10330.53 1526.08 54816.44 00:30:36.746 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 225b6a57-7573-424b-8f39-534539ab8daf 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e08fd47-46f0-4181-a361-1048b01f9df7 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.746 rmmod nvme_tcp 00:30:36.746 rmmod nvme_fabrics 00:30:36.746 rmmod nvme_keyring 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 924385 ']' 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 924385 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 924385 ']' 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 924385 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:36.746 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 924385 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 924385' 00:30:36.747 killing process with pid 924385 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 924385 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 924385 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.747 07:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.126 07:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.126 00:30:38.126 real 0m20.918s 00:30:38.126 user 0m55.111s 00:30:38.126 sys 0m9.246s 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:38.126 ************************************ 00:30:38.126 END TEST nvmf_lvol 00:30:38.126 ************************************ 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.126 ************************************ 00:30:38.126 START TEST nvmf_lvs_grow 00:30:38.126 
************************************ 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:38.126 * Looking for test storage... 00:30:38.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:38.126 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.127 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:38.127 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.127 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:38.127 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:38.127 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.127 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:38.127 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.386 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.386 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.386 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:38.386 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.386 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:38.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.386 --rc genhtml_branch_coverage=1 00:30:38.386 --rc genhtml_function_coverage=1 00:30:38.386 --rc genhtml_legend=1 00:30:38.386 --rc geninfo_all_blocks=1 00:30:38.386 --rc geninfo_unexecuted_blocks=1 00:30:38.386 00:30:38.386 ' 00:30:38.386 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:38.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.386 --rc genhtml_branch_coverage=1 00:30:38.386 --rc genhtml_function_coverage=1 00:30:38.387 --rc genhtml_legend=1 00:30:38.387 --rc geninfo_all_blocks=1 00:30:38.387 --rc geninfo_unexecuted_blocks=1 00:30:38.387 00:30:38.387 ' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:38.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.387 --rc genhtml_branch_coverage=1 00:30:38.387 --rc genhtml_function_coverage=1 00:30:38.387 --rc genhtml_legend=1 00:30:38.387 --rc geninfo_all_blocks=1 00:30:38.387 --rc geninfo_unexecuted_blocks=1 00:30:38.387 00:30:38.387 ' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:38.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.387 --rc genhtml_branch_coverage=1 00:30:38.387 --rc genhtml_function_coverage=1 00:30:38.387 --rc genhtml_legend=1 00:30:38.387 --rc geninfo_all_blocks=1 00:30:38.387 --rc geninfo_unexecuted_blocks=1 00:30:38.387 00:30:38.387 ' 00:30:38.387 07:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
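build_nvmf_app_args, whose trace runs through here, assembles the nvmf_tgt command line in the NVMF_APP bash array; because this job passes --interrupt-mode, that flag is appended just below. A condensed sketch of the pattern (the binary path is taken from the launch line later in this log; the exact initialization is an approximation of nvmf/common.sh, not a verbatim copy):

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and tracepoint group mask
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless a no-hugepages run is requested
  NVMF_APP+=(--interrupt-mode)                  # added for this interrupt-mode job
  # the array is later run inside the target namespace, e.g.:
  #   ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x1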
00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.387 07:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:43.659 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.659 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.659 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.659 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.659 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.659 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:43.660 07:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
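gather_supported_nvmf_pci_devs, traced above and below, matches the installed NICs against per-vendor device-id tables (this job selects the e810 set via SPDK_TEST_NVMF_NICS=e810) and then resolves each PCI function to its kernel interface through sysfs. A minimal sketch of that resolution loop, using the two functions found in this run as stand-ins for the harness's cached PCI scan:

  pci_devs=(0000:86:00.0 0000:86:00.1)                    # the E810 functions (0x8086 - 0x159b) reported below
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs entries for interfaces bound to this function
      pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names (cvl_0_0 / cvl_0_1 here)
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done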
00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:43.660 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:43.660 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:43.660 Found net devices under 0000:86:00.0: cvl_0_0 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:43.660 Found net devices under 0000:86:00.1: cvl_0_1 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.660 07:40:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.660 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:43.919 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:43.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:30:43.919 00:30:43.919 --- 10.0.0.2 ping statistics --- 00:30:43.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.919 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:30:43.919 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:43.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:30:43.919 00:30:43.919 --- 10.0.0.1 ping statistics --- 00:30:43.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.919 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:30:43.919 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.919 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:43.919 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:43.919 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.919 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:43.919 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=930016 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 930016 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 930016 ']' 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.920 07:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:43.920 [2024-11-26 07:40:11.859473] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
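nvmf_tcp_init, traced above, splits the two E810 ports across a private network namespace so that the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 on the host) exchange NVMe/TCP traffic over a real link; it then checks reachability with the pings shown above and launches nvmf_tgt inside the namespace. A condensed sketch of that plumbing, using the interface and address values from this run:

  ip netns add cvl_0_0_ns_spdk                                        # namespace that owns the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, namespace side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                  # host -> namespace reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host reachability check
  # the target itself is then started in the namespace:
  #   ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1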
00:30:43.920 [2024-11-26 07:40:11.860426] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:30:43.920 [2024-11-26 07:40:11.860461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.920 [2024-11-26 07:40:11.924010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.920 [2024-11-26 07:40:11.962907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.920 [2024-11-26 07:40:11.962940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.920 [2024-11-26 07:40:11.962951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.920 [2024-11-26 07:40:11.962958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.920 [2024-11-26 07:40:11.962963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.920 [2024-11-26 07:40:11.963489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.180 [2024-11-26 07:40:12.029685] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:44.180 [2024-11-26 07:40:12.029895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:44.180 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.180 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:44.180 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.180 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:44.180 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:44.180 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.180 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:44.180 [2024-11-26 07:40:12.264145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:44.440 ************************************ 00:30:44.440 START TEST lvs_grow_clean 00:30:44.440 ************************************ 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:44.440 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:44.700 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:44.700 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:44.700 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:44.700 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:44.700 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:44.958 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:44.958 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:44.958 07:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 lvol 150 00:30:45.218 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3303ae04-b095-4551-acbd-c46484bce94f 00:30:45.218 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:45.218 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:45.478 [2024-11-26 07:40:13.335867] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:45.478 [2024-11-26 07:40:13.336029] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:45.478 true 00:30:45.478 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:45.478 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:45.478 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:45.478 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:45.738 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3303ae04-b095-4551-acbd-c46484bce94f 00:30:45.997 07:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.302 [2024-11-26 07:40:14.096084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=930317 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 930317 /var/tmp/bdevperf.sock 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 930317 ']' 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:46.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.302 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:46.302 [2024-11-26 07:40:14.354567] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:30:46.302 [2024-11-26 07:40:14.354615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930317 ] 00:30:46.594 [2024-11-26 07:40:14.416618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.594 [2024-11-26 07:40:14.459098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.594 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:46.594 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:46.594 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:46.867 Nvme0n1 00:30:46.867 07:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:47.149 [ 00:30:47.149 { 00:30:47.149 "name": "Nvme0n1", 00:30:47.149 "aliases": [ 00:30:47.149 "3303ae04-b095-4551-acbd-c46484bce94f" 00:30:47.149 ], 00:30:47.149 "product_name": "NVMe disk", 00:30:47.149 "block_size": 4096, 00:30:47.149 "num_blocks": 38912, 00:30:47.149 "uuid": "3303ae04-b095-4551-acbd-c46484bce94f", 00:30:47.149 "numa_id": 1, 00:30:47.149 "assigned_rate_limits": { 00:30:47.149 "rw_ios_per_sec": 0, 00:30:47.149 "rw_mbytes_per_sec": 0, 00:30:47.149 "r_mbytes_per_sec": 0, 00:30:47.149 "w_mbytes_per_sec": 0 00:30:47.149 }, 00:30:47.149 "claimed": false, 00:30:47.149 "zoned": false, 00:30:47.149 "supported_io_types": { 00:30:47.149 "read": true, 00:30:47.149 "write": true, 00:30:47.149 "unmap": true, 00:30:47.149 "flush": true, 00:30:47.149 "reset": true, 00:30:47.149 "nvme_admin": true, 00:30:47.149 "nvme_io": true, 00:30:47.149 "nvme_io_md": false, 00:30:47.149 "write_zeroes": true, 00:30:47.149 "zcopy": false, 00:30:47.149 "get_zone_info": false, 00:30:47.149 "zone_management": false, 00:30:47.149 "zone_append": false, 00:30:47.149 "compare": true, 00:30:47.149 "compare_and_write": true, 00:30:47.149 "abort": true, 00:30:47.149 "seek_hole": false, 00:30:47.149 "seek_data": false, 00:30:47.149 "copy": true, 
00:30:47.149 "nvme_iov_md": false 00:30:47.149 }, 00:30:47.149 "memory_domains": [ 00:30:47.149 { 00:30:47.149 "dma_device_id": "system", 00:30:47.149 "dma_device_type": 1 00:30:47.149 } 00:30:47.149 ], 00:30:47.149 "driver_specific": { 00:30:47.149 "nvme": [ 00:30:47.149 { 00:30:47.149 "trid": { 00:30:47.149 "trtype": "TCP", 00:30:47.149 "adrfam": "IPv4", 00:30:47.149 "traddr": "10.0.0.2", 00:30:47.149 "trsvcid": "4420", 00:30:47.149 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:47.149 }, 00:30:47.149 "ctrlr_data": { 00:30:47.149 "cntlid": 1, 00:30:47.149 "vendor_id": "0x8086", 00:30:47.149 "model_number": "SPDK bdev Controller", 00:30:47.149 "serial_number": "SPDK0", 00:30:47.149 "firmware_revision": "25.01", 00:30:47.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:47.149 "oacs": { 00:30:47.149 "security": 0, 00:30:47.149 "format": 0, 00:30:47.149 "firmware": 0, 00:30:47.149 "ns_manage": 0 00:30:47.149 }, 00:30:47.149 "multi_ctrlr": true, 00:30:47.150 "ana_reporting": false 00:30:47.150 }, 00:30:47.150 "vs": { 00:30:47.150 "nvme_version": "1.3" 00:30:47.150 }, 00:30:47.150 "ns_data": { 00:30:47.150 "id": 1, 00:30:47.150 "can_share": true 00:30:47.150 } 00:30:47.150 } 00:30:47.150 ], 00:30:47.150 "mp_policy": "active_passive" 00:30:47.150 } 00:30:47.150 } 00:30:47.150 ] 00:30:47.150 07:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=930532 00:30:47.150 07:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:47.150 07:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:47.150 Running I/O for 10 seconds... 
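Before the bdevperf samples below, the lvs_grow_clean flow traced above can be summarized: a 200M file backs an AIO bdev, an lvstore with 4 MiB clusters is created on it (49 data clusters), a 150M lvol is exported over NVMe/TCP, the backing file is grown to 400M and rescanned, and only bdev_lvol_grow_lvstore (issued while bdevperf is driving I/O) makes the extra space visible (99 clusters). A condensed sketch of that sequence; $rpc and $lvs are placeholders for the rpc.py path and the lvstore UUID captured above, and aio_file stands in for the aio_bdev backing file:

  truncate -s 200M aio_file                                  # initial backing file
  $rpc bdev_aio_create aio_file aio_bdev 4096                # AIO bdev with a 4 KiB block size
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_create -u "$lvs" lvol 150                   # lvol later exported via nqn.2016-06.io.spdk:cnode0
  truncate -s 400M aio_file                                  # grow the backing file on disk
  $rpc bdev_aio_rescan aio_bdev                              # SPDK sees the larger AIO bdev; lvstore still reports 49 clusters
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                      # lvstore claims the new space: total_data_clusters 49 -> 99
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'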
00:30:48.151 Latency(us) 00:30:48.151 [2024-11-26T06:40:16.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.151 Nvme0n1 : 1.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:30:48.151 [2024-11-26T06:40:16.251Z] =================================================================================================================== 00:30:48.151 [2024-11-26T06:40:16.251Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:30:48.151 00:30:49.088 07:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:49.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.088 Nvme0n1 : 2.00 22796.50 89.05 0.00 0.00 0.00 0.00 0.00 00:30:49.088 [2024-11-26T06:40:17.188Z] =================================================================================================================== 00:30:49.088 [2024-11-26T06:40:17.188Z] Total : 22796.50 89.05 0.00 0.00 0.00 0.00 0.00 00:30:49.088 00:30:49.347 true 00:30:49.347 07:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:49.347 07:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:49.605 07:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:49.605 07:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:49.605 07:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 930532 00:30:50.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.172 Nvme0n1 : 3.00 22881.33 89.38 0.00 0.00 0.00 0.00 0.00 00:30:50.172 [2024-11-26T06:40:18.272Z] =================================================================================================================== 00:30:50.172 [2024-11-26T06:40:18.272Z] Total : 22881.33 89.38 0.00 0.00 0.00 0.00 0.00 00:30:50.172 00:30:51.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.108 Nvme0n1 : 4.00 22967.50 89.72 0.00 0.00 0.00 0.00 0.00 00:30:51.108 [2024-11-26T06:40:19.208Z] =================================================================================================================== 00:30:51.108 [2024-11-26T06:40:19.208Z] Total : 22967.50 89.72 0.00 0.00 0.00 0.00 0.00 00:30:51.108 00:30:52.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.484 Nvme0n1 : 5.00 22996.80 89.83 0.00 0.00 0.00 0.00 0.00 00:30:52.484 [2024-11-26T06:40:20.584Z] =================================================================================================================== 00:30:52.484 [2024-11-26T06:40:20.584Z] Total : 22996.80 89.83 0.00 0.00 0.00 0.00 0.00 00:30:52.484 00:30:53.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:53.422 Nvme0n1 : 6.00 23027.00 89.95 0.00 0.00 0.00 0.00 0.00 00:30:53.422 [2024-11-26T06:40:21.522Z] 
=================================================================================================================== 00:30:53.422 [2024-11-26T06:40:21.522Z] Total : 23027.00 89.95 0.00 0.00 0.00 0.00 0.00 00:30:53.422 00:30:54.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:54.358 Nvme0n1 : 7.00 23055.43 90.06 0.00 0.00 0.00 0.00 0.00 00:30:54.358 [2024-11-26T06:40:22.458Z] =================================================================================================================== 00:30:54.358 [2024-11-26T06:40:22.458Z] Total : 23055.43 90.06 0.00 0.00 0.00 0.00 0.00 00:30:54.358 00:30:55.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.292 Nvme0n1 : 8.00 23094.50 90.21 0.00 0.00 0.00 0.00 0.00 00:30:55.292 [2024-11-26T06:40:23.392Z] =================================================================================================================== 00:30:55.292 [2024-11-26T06:40:23.392Z] Total : 23094.50 90.21 0.00 0.00 0.00 0.00 0.00 00:30:55.292 00:30:56.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:56.225 Nvme0n1 : 9.00 23117.89 90.30 0.00 0.00 0.00 0.00 0.00 00:30:56.225 [2024-11-26T06:40:24.325Z] =================================================================================================================== 00:30:56.225 [2024-11-26T06:40:24.325Z] Total : 23117.89 90.30 0.00 0.00 0.00 0.00 0.00 00:30:56.225 00:30:57.159 00:30:57.159 Latency(us) 00:30:57.159 [2024-11-26T06:40:25.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:57.159 Nvme0n1 : 10.00 23137.19 90.38 0.00 0.00 5529.00 3191.32 15842.62 00:30:57.159 [2024-11-26T06:40:25.259Z] =================================================================================================================== 00:30:57.159 [2024-11-26T06:40:25.259Z] Total : 23137.19 90.38 0.00 0.00 5529.00 3191.32 15842.62 00:30:57.159 { 00:30:57.159 "results": [ 00:30:57.159 { 00:30:57.159 "job": "Nvme0n1", 00:30:57.159 "core_mask": "0x2", 00:30:57.159 "workload": "randwrite", 00:30:57.159 "status": "finished", 00:30:57.159 "queue_depth": 128, 00:30:57.159 "io_size": 4096, 00:30:57.159 "runtime": 10.002513, 00:30:57.159 "iops": 23137.185625252376, 00:30:57.159 "mibps": 90.37963134864209, 00:30:57.159 "io_failed": 0, 00:30:57.159 "io_timeout": 0, 00:30:57.159 "avg_latency_us": 5529.000415698991, 00:30:57.159 "min_latency_us": 3191.318260869565, 00:30:57.159 "max_latency_us": 15842.615652173912 00:30:57.159 } 00:30:57.159 ], 00:30:57.159 "core_count": 1 00:30:57.159 } 00:30:57.159 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 930317 00:30:57.159 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 930317 ']' 00:30:57.159 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 930317 00:30:57.159 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:57.159 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.159 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 930317 00:30:57.418 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:57.418 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:57.418 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 930317' 00:30:57.418 killing process with pid 930317 00:30:57.418 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 930317 00:30:57.418 Received shutdown signal, test time was about 10.000000 seconds 00:30:57.418 00:30:57.418 Latency(us) 00:30:57.418 [2024-11-26T06:40:25.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.418 [2024-11-26T06:40:25.518Z] =================================================================================================================== 00:30:57.418 [2024-11-26T06:40:25.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:57.418 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 930317 00:30:57.418 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.677 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:57.936 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:57.936 07:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:58.196 [2024-11-26 07:40:26.224060] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:58.196 
07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:58.196 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:58.456 request: 00:30:58.456 { 00:30:58.456 "uuid": "92f51bdc-d906-40e9-b2a5-2aef7309f829", 00:30:58.456 "method": "bdev_lvol_get_lvstores", 00:30:58.456 "req_id": 1 00:30:58.456 } 00:30:58.456 Got JSON-RPC error response 00:30:58.456 response: 00:30:58.456 { 00:30:58.456 "code": -19, 00:30:58.456 "message": "No such device" 00:30:58.456 } 00:30:58.456 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:58.456 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:58.456 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:58.456 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:58.456 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:58.715 aio_bdev 00:30:58.715 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3303ae04-b095-4551-acbd-c46484bce94f 00:30:58.715 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3303ae04-b095-4551-acbd-c46484bce94f 00:30:58.715 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:58.715 07:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:58.715 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:58.715 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:58.715 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:58.974 07:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3303ae04-b095-4551-acbd-c46484bce94f -t 2000 00:30:58.974 [ 00:30:58.974 { 00:30:58.974 "name": "3303ae04-b095-4551-acbd-c46484bce94f", 00:30:58.974 "aliases": [ 00:30:58.974 "lvs/lvol" 00:30:58.974 ], 00:30:58.974 "product_name": "Logical Volume", 00:30:58.974 "block_size": 4096, 00:30:58.974 "num_blocks": 38912, 00:30:58.974 "uuid": "3303ae04-b095-4551-acbd-c46484bce94f", 00:30:58.974 "assigned_rate_limits": { 00:30:58.974 "rw_ios_per_sec": 0, 00:30:58.974 "rw_mbytes_per_sec": 0, 00:30:58.974 "r_mbytes_per_sec": 0, 00:30:58.974 "w_mbytes_per_sec": 0 00:30:58.974 }, 00:30:58.974 "claimed": false, 00:30:58.974 "zoned": false, 00:30:58.974 "supported_io_types": { 00:30:58.974 "read": true, 00:30:58.974 "write": true, 00:30:58.974 "unmap": true, 00:30:58.974 "flush": false, 00:30:58.975 "reset": true, 00:30:58.975 "nvme_admin": false, 00:30:58.975 "nvme_io": false, 00:30:58.975 "nvme_io_md": false, 00:30:58.975 "write_zeroes": true, 00:30:58.975 "zcopy": false, 00:30:58.975 "get_zone_info": false, 00:30:58.975 "zone_management": false, 00:30:58.975 "zone_append": false, 00:30:58.975 "compare": false, 00:30:58.975 "compare_and_write": false, 00:30:58.975 "abort": false, 00:30:58.975 "seek_hole": true, 00:30:58.975 "seek_data": true, 00:30:58.975 "copy": false, 00:30:58.975 "nvme_iov_md": false 00:30:58.975 }, 00:30:58.975 "driver_specific": { 00:30:58.975 "lvol": { 00:30:58.975 "lvol_store_uuid": "92f51bdc-d906-40e9-b2a5-2aef7309f829", 00:30:58.975 "base_bdev": "aio_bdev", 00:30:58.975 "thin_provision": false, 00:30:58.975 "num_allocated_clusters": 38, 00:30:58.975 "snapshot": false, 00:30:58.975 "clone": false, 00:30:58.975 "esnap_clone": false 00:30:58.975 } 00:30:58.975 } 00:30:58.975 } 00:30:58.975 ] 00:30:58.975 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:58.975 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:58.975 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:59.234 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:59.234 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:30:59.234 07:40:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:59.494 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:59.494 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3303ae04-b095-4551-acbd-c46484bce94f 00:30:59.754 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 92f51bdc-d906-40e9-b2a5-2aef7309f829 00:31:00.013 07:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:00.013 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:00.013 00:31:00.013 real 0m15.761s 00:31:00.013 user 0m15.321s 00:31:00.013 sys 0m1.462s 00:31:00.013 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.013 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:00.013 ************************************ 00:31:00.013 END TEST lvs_grow_clean 00:31:00.013 ************************************ 00:31:00.272 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:00.272 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:00.272 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.272 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:00.272 ************************************ 00:31:00.272 START TEST lvs_grow_dirty 00:31:00.272 ************************************ 00:31:00.272 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:00.273 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:00.273 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:00.273 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:00.273 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:00.273 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:00.273 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:00.273 07:40:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:00.273 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:00.273 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:00.532 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:00.532 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:00.532 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:00.532 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:00.532 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:00.790 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:00.790 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:00.790 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a090e29a-1ca2-4291-80b2-c48fe472b805 lvol 150 00:31:01.048 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3d57e1e4-f219-4f1c-af7d-156330afa49a 00:31:01.049 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:01.049 07:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:01.306 [2024-11-26 07:40:29.155869] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:01.306 [2024-11-26 07:40:29.156024] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:01.306 true 00:31:01.306 07:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:01.306 07:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:01.306 07:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:01.306 07:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:01.565 07:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3d57e1e4-f219-4f1c-af7d-156330afa49a 00:31:01.824 07:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.824 [2024-11-26 07:40:29.904300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.083 07:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=932934 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 932934 /var/tmp/bdevperf.sock 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 932934 ']' 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:02.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.083 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:02.083 [2024-11-26 07:40:30.140258] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
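Before the bdevperf run that starts above, the dirty variant has built the lvstore it is about to grow: a 200 MiB AIO file yields 49 usable 4 MiB data clusters (the remainder presumably going to lvstore metadata), a 150 MiB lvol is carved out and exported over NVMe/TCP, and the backing file is then truncated to 400 MiB and rescanned so that a later bdev_lvol_grow_lvstore, issued while I/O is in flight just as in the clean run earlier, can take total_data_clusters from 49 to 99. A condensed sketch of that setup with the UUIDs reported in this run substituted for readability (paths shortened, error handling omitted):

  # Build the lvstore that the test will grow (sizes and RPCs as recorded above).
  rm -f test/nvmf/target/aio_bdev
  truncate -s 200M test/nvmf/target/aio_bdev
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  ./scripts/rpc.py bdev_lvol_create -u a090e29a-1ca2-4291-80b2-c48fe472b805 lvol 150

  # Grow the backing file and let the AIO bdev pick up the new size (51200 -> 102400 blocks).
  truncate -s 400M test/nvmf/target/aio_bdev
  ./scripts/rpc.py bdev_aio_rescan aio_bdev

  # Export the lvol over NVMe/TCP for bdevperf to attach to.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3d57e1e4-f219-4f1c-af7d-156330afa49a
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # The actual expansion happens mid-workload, then the new cluster count is verified.
  ./scripts/rpc.py bdev_lvol_grow_lvstore -u a090e29a-1ca2-4291-80b2-c48fe472b805
  ./scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 | jq -r '.[0].total_data_clusters'   # 99 after the grow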
00:31:02.083 [2024-11-26 07:40:30.140305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932934 ] 00:31:02.342 [2024-11-26 07:40:30.202382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.342 [2024-11-26 07:40:30.247846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.342 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.342 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:02.342 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:02.910 Nvme0n1 00:31:02.910 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:02.910 [ 00:31:02.910 { 00:31:02.910 "name": "Nvme0n1", 00:31:02.910 "aliases": [ 00:31:02.910 "3d57e1e4-f219-4f1c-af7d-156330afa49a" 00:31:02.910 ], 00:31:02.910 "product_name": "NVMe disk", 00:31:02.910 "block_size": 4096, 00:31:02.910 "num_blocks": 38912, 00:31:02.910 "uuid": "3d57e1e4-f219-4f1c-af7d-156330afa49a", 00:31:02.910 "numa_id": 1, 00:31:02.910 "assigned_rate_limits": { 00:31:02.910 "rw_ios_per_sec": 0, 00:31:02.910 "rw_mbytes_per_sec": 0, 00:31:02.910 "r_mbytes_per_sec": 0, 00:31:02.910 "w_mbytes_per_sec": 0 00:31:02.910 }, 00:31:02.910 "claimed": false, 00:31:02.910 "zoned": false, 00:31:02.910 "supported_io_types": { 00:31:02.910 "read": true, 00:31:02.910 "write": true, 00:31:02.910 "unmap": true, 00:31:02.910 "flush": true, 00:31:02.910 "reset": true, 00:31:02.910 "nvme_admin": true, 00:31:02.910 "nvme_io": true, 00:31:02.910 "nvme_io_md": false, 00:31:02.910 "write_zeroes": true, 00:31:02.910 "zcopy": false, 00:31:02.910 "get_zone_info": false, 00:31:02.910 "zone_management": false, 00:31:02.910 "zone_append": false, 00:31:02.910 "compare": true, 00:31:02.910 "compare_and_write": true, 00:31:02.910 "abort": true, 00:31:02.910 "seek_hole": false, 00:31:02.910 "seek_data": false, 00:31:02.910 "copy": true, 00:31:02.910 "nvme_iov_md": false 00:31:02.910 }, 00:31:02.910 "memory_domains": [ 00:31:02.910 { 00:31:02.910 "dma_device_id": "system", 00:31:02.910 "dma_device_type": 1 00:31:02.910 } 00:31:02.910 ], 00:31:02.910 "driver_specific": { 00:31:02.910 "nvme": [ 00:31:02.910 { 00:31:02.910 "trid": { 00:31:02.910 "trtype": "TCP", 00:31:02.910 "adrfam": "IPv4", 00:31:02.910 "traddr": "10.0.0.2", 00:31:02.910 "trsvcid": "4420", 00:31:02.910 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:02.910 }, 00:31:02.910 "ctrlr_data": { 00:31:02.910 "cntlid": 1, 00:31:02.910 "vendor_id": "0x8086", 00:31:02.910 "model_number": "SPDK bdev Controller", 00:31:02.910 "serial_number": "SPDK0", 00:31:02.910 "firmware_revision": "25.01", 00:31:02.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.910 "oacs": { 00:31:02.910 "security": 0, 00:31:02.910 "format": 0, 00:31:02.910 "firmware": 0, 00:31:02.910 "ns_manage": 0 00:31:02.910 }, 
00:31:02.910 "multi_ctrlr": true, 00:31:02.910 "ana_reporting": false 00:31:02.910 }, 00:31:02.910 "vs": { 00:31:02.910 "nvme_version": "1.3" 00:31:02.910 }, 00:31:02.910 "ns_data": { 00:31:02.910 "id": 1, 00:31:02.910 "can_share": true 00:31:02.910 } 00:31:02.910 } 00:31:02.910 ], 00:31:02.910 "mp_policy": "active_passive" 00:31:02.910 } 00:31:02.910 } 00:31:02.910 ] 00:31:02.910 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=933120 00:31:02.910 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:02.910 07:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:03.169 Running I/O for 10 seconds... 00:31:04.103 Latency(us) 00:31:04.103 [2024-11-26T06:40:32.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.103 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:31:04.103 [2024-11-26T06:40:32.203Z] =================================================================================================================== 00:31:04.103 [2024-11-26T06:40:32.203Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:31:04.103 00:31:05.040 07:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:05.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.041 Nvme0n1 : 2.00 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:31:05.041 [2024-11-26T06:40:33.141Z] =================================================================================================================== 00:31:05.041 [2024-11-26T06:40:33.141Z] Total : 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:31:05.041 00:31:05.300 true 00:31:05.300 07:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:05.300 07:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:05.559 07:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:05.559 07:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:05.559 07:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 933120 00:31:06.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.127 Nvme0n1 : 3.00 22966.00 89.71 0.00 0.00 0.00 0.00 0.00 00:31:06.127 [2024-11-26T06:40:34.228Z] =================================================================================================================== 00:31:06.128 [2024-11-26T06:40:34.228Z] Total : 22966.00 89.71 0.00 0.00 0.00 0.00 0.00 00:31:06.128 00:31:07.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:07.065 Nvme0n1 : 4.00 23031.00 89.96 0.00 0.00 0.00 0.00 0.00 00:31:07.065 [2024-11-26T06:40:35.165Z] =================================================================================================================== 00:31:07.065 [2024-11-26T06:40:35.165Z] Total : 23031.00 89.96 0.00 0.00 0.00 0.00 0.00 00:31:07.065 00:31:08.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.002 Nvme0n1 : 5.00 23073.00 90.13 0.00 0.00 0.00 0.00 0.00 00:31:08.002 [2024-11-26T06:40:36.102Z] =================================================================================================================== 00:31:08.002 [2024-11-26T06:40:36.102Z] Total : 23073.00 90.13 0.00 0.00 0.00 0.00 0.00 00:31:08.002 00:31:09.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.378 Nvme0n1 : 6.00 23122.17 90.32 0.00 0.00 0.00 0.00 0.00 00:31:09.378 [2024-11-26T06:40:37.478Z] =================================================================================================================== 00:31:09.378 [2024-11-26T06:40:37.478Z] Total : 23122.17 90.32 0.00 0.00 0.00 0.00 0.00 00:31:09.379 00:31:10.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:10.314 Nvme0n1 : 7.00 23157.29 90.46 0.00 0.00 0.00 0.00 0.00 00:31:10.314 [2024-11-26T06:40:38.414Z] =================================================================================================================== 00:31:10.314 [2024-11-26T06:40:38.414Z] Total : 23157.29 90.46 0.00 0.00 0.00 0.00 0.00 00:31:10.314 00:31:11.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.249 Nvme0n1 : 8.00 23167.75 90.50 0.00 0.00 0.00 0.00 0.00 00:31:11.249 [2024-11-26T06:40:39.349Z] =================================================================================================================== 00:31:11.249 [2024-11-26T06:40:39.349Z] Total : 23167.75 90.50 0.00 0.00 0.00 0.00 0.00 00:31:11.249 00:31:12.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:12.183 Nvme0n1 : 9.00 23190.00 90.59 0.00 0.00 0.00 0.00 0.00 00:31:12.183 [2024-11-26T06:40:40.283Z] =================================================================================================================== 00:31:12.183 [2024-11-26T06:40:40.283Z] Total : 23190.00 90.59 0.00 0.00 0.00 0.00 0.00 00:31:12.183 00:31:13.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:13.119 Nvme0n1 : 10.00 23169.70 90.51 0.00 0.00 0.00 0.00 0.00 00:31:13.119 [2024-11-26T06:40:41.219Z] =================================================================================================================== 00:31:13.119 [2024-11-26T06:40:41.219Z] Total : 23169.70 90.51 0.00 0.00 0.00 0.00 0.00 00:31:13.119 00:31:13.119 00:31:13.119 Latency(us) 00:31:13.119 [2024-11-26T06:40:41.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:13.119 Nvme0n1 : 10.01 23169.94 90.51 0.00 0.00 5521.33 3191.32 15272.74 00:31:13.119 [2024-11-26T06:40:41.219Z] =================================================================================================================== 00:31:13.119 [2024-11-26T06:40:41.219Z] Total : 23169.94 90.51 0.00 0.00 5521.33 3191.32 15272.74 00:31:13.119 { 00:31:13.119 "results": [ 00:31:13.119 { 00:31:13.119 "job": "Nvme0n1", 00:31:13.119 "core_mask": "0x2", 00:31:13.119 "workload": "randwrite", 
00:31:13.119 "status": "finished", 00:31:13.119 "queue_depth": 128, 00:31:13.119 "io_size": 4096, 00:31:13.119 "runtime": 10.00542, 00:31:13.119 "iops": 23169.94189149481, 00:31:13.119 "mibps": 90.5075855136516, 00:31:13.119 "io_failed": 0, 00:31:13.119 "io_timeout": 0, 00:31:13.119 "avg_latency_us": 5521.328368838938, 00:31:13.119 "min_latency_us": 3191.318260869565, 00:31:13.119 "max_latency_us": 15272.737391304348 00:31:13.119 } 00:31:13.119 ], 00:31:13.119 "core_count": 1 00:31:13.119 } 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 932934 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 932934 ']' 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 932934 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 932934 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 932934' 00:31:13.119 killing process with pid 932934 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 932934 00:31:13.119 Received shutdown signal, test time was about 10.000000 seconds 00:31:13.119 00:31:13.119 Latency(us) 00:31:13.119 [2024-11-26T06:40:41.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.119 [2024-11-26T06:40:41.219Z] =================================================================================================================== 00:31:13.119 [2024-11-26T06:40:41.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.119 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 932934 00:31:13.378 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:13.636 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.895 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 930016 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 930016 00:31:13.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 930016 Killed "${NVMF_APP[@]}" "$@" 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=934955 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 934955 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 934955 ']' 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.896 07:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.154 [2024-11-26 07:40:42.017450] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.155 [2024-11-26 07:40:42.018372] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
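This is the step that gives lvs_grow_dirty its name: instead of tearing the lvstore down cleanly, the previous nvmf target (pid 930016) is killed with SIGKILL while the lvstore is still open, leaving it dirty on the AIO file, and a fresh nvmf_tgt is started in interrupt mode. When the backing file is re-registered a few records below, the blobstore detects the unclean shutdown and runs recovery before the lvstore and its lvol reappear. In outline, using the commands recorded in this run (pids are specific to this run; paths shortened):

  # Crash the target with the lvstore still mounted, then bring up a new one.
  kill -9 930016                                                      # old nvmf_tgt, lvstore left dirty
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &

  # Re-registering the AIO file triggers blobstore recovery ("Performing recovery on blobstore" below),
  # after which the free/total cluster counts are checked against the pre-crash values (61 free, 99 total).
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 | jq -r '.[0].free_clusters'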
00:31:14.155 [2024-11-26 07:40:42.018409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.155 [2024-11-26 07:40:42.084929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.155 [2024-11-26 07:40:42.126021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.155 [2024-11-26 07:40:42.126056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.155 [2024-11-26 07:40:42.126063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.155 [2024-11-26 07:40:42.126069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.155 [2024-11-26 07:40:42.126074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.155 [2024-11-26 07:40:42.126594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.155 [2024-11-26 07:40:42.193879] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:14.155 [2024-11-26 07:40:42.194091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:14.155 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.155 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:14.155 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.155 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.155 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:14.413 [2024-11-26 07:40:42.425481] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:14.413 [2024-11-26 07:40:42.425580] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:14.413 [2024-11-26 07:40:42.425617] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3d57e1e4-f219-4f1c-af7d-156330afa49a 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3d57e1e4-f219-4f1c-af7d-156330afa49a 00:31:14.413 07:40:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:14.413 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:14.671 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3d57e1e4-f219-4f1c-af7d-156330afa49a -t 2000 00:31:14.930 [ 00:31:14.930 { 00:31:14.930 "name": "3d57e1e4-f219-4f1c-af7d-156330afa49a", 00:31:14.930 "aliases": [ 00:31:14.930 "lvs/lvol" 00:31:14.930 ], 00:31:14.930 "product_name": "Logical Volume", 00:31:14.930 "block_size": 4096, 00:31:14.930 "num_blocks": 38912, 00:31:14.930 "uuid": "3d57e1e4-f219-4f1c-af7d-156330afa49a", 00:31:14.930 "assigned_rate_limits": { 00:31:14.930 "rw_ios_per_sec": 0, 00:31:14.930 "rw_mbytes_per_sec": 0, 00:31:14.930 "r_mbytes_per_sec": 0, 00:31:14.930 "w_mbytes_per_sec": 0 00:31:14.930 }, 00:31:14.930 "claimed": false, 00:31:14.930 "zoned": false, 00:31:14.930 "supported_io_types": { 00:31:14.930 "read": true, 00:31:14.930 "write": true, 00:31:14.930 "unmap": true, 00:31:14.930 "flush": false, 00:31:14.930 "reset": true, 00:31:14.930 "nvme_admin": false, 00:31:14.930 "nvme_io": false, 00:31:14.930 "nvme_io_md": false, 00:31:14.930 "write_zeroes": true, 00:31:14.930 "zcopy": false, 00:31:14.930 "get_zone_info": false, 00:31:14.930 "zone_management": false, 00:31:14.930 "zone_append": false, 00:31:14.930 "compare": false, 00:31:14.930 "compare_and_write": false, 00:31:14.930 "abort": false, 00:31:14.930 "seek_hole": true, 00:31:14.930 "seek_data": true, 00:31:14.930 "copy": false, 00:31:14.930 "nvme_iov_md": false 00:31:14.930 }, 00:31:14.930 "driver_specific": { 00:31:14.930 "lvol": { 00:31:14.930 "lvol_store_uuid": "a090e29a-1ca2-4291-80b2-c48fe472b805", 00:31:14.930 "base_bdev": "aio_bdev", 00:31:14.930 "thin_provision": false, 00:31:14.930 "num_allocated_clusters": 38, 00:31:14.930 "snapshot": false, 00:31:14.930 "clone": false, 00:31:14.930 "esnap_clone": false 00:31:14.930 } 00:31:14.930 } 00:31:14.930 } 00:31:14.930 ] 00:31:14.930 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:14.930 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:14.930 07:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:15.189 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:15.189 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:15.189 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:15.189 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:15.189 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:15.448 [2024-11-26 07:40:43.419167] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:15.448 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:15.706 request: 00:31:15.706 { 00:31:15.706 "uuid": "a090e29a-1ca2-4291-80b2-c48fe472b805", 00:31:15.706 "method": "bdev_lvol_get_lvstores", 00:31:15.706 "req_id": 1 00:31:15.706 } 00:31:15.706 Got JSON-RPC error response 00:31:15.706 response: 00:31:15.706 { 00:31:15.706 "code": -19, 00:31:15.706 "message": "No such device" 
00:31:15.706 } 00:31:15.706 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:15.706 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:15.706 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:15.706 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:15.706 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:15.965 aio_bdev 00:31:15.965 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3d57e1e4-f219-4f1c-af7d-156330afa49a 00:31:15.965 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3d57e1e4-f219-4f1c-af7d-156330afa49a 00:31:15.965 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:15.965 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:15.965 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:15.965 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:15.965 07:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:15.965 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3d57e1e4-f219-4f1c-af7d-156330afa49a -t 2000 00:31:16.223 [ 00:31:16.223 { 00:31:16.223 "name": "3d57e1e4-f219-4f1c-af7d-156330afa49a", 00:31:16.223 "aliases": [ 00:31:16.223 "lvs/lvol" 00:31:16.223 ], 00:31:16.223 "product_name": "Logical Volume", 00:31:16.223 "block_size": 4096, 00:31:16.223 "num_blocks": 38912, 00:31:16.223 "uuid": "3d57e1e4-f219-4f1c-af7d-156330afa49a", 00:31:16.223 "assigned_rate_limits": { 00:31:16.223 "rw_ios_per_sec": 0, 00:31:16.223 "rw_mbytes_per_sec": 0, 00:31:16.223 "r_mbytes_per_sec": 0, 00:31:16.223 "w_mbytes_per_sec": 0 00:31:16.223 }, 00:31:16.223 "claimed": false, 00:31:16.223 "zoned": false, 00:31:16.223 "supported_io_types": { 00:31:16.223 "read": true, 00:31:16.223 "write": true, 00:31:16.223 "unmap": true, 00:31:16.223 "flush": false, 00:31:16.223 "reset": true, 00:31:16.223 "nvme_admin": false, 00:31:16.223 "nvme_io": false, 00:31:16.223 "nvme_io_md": false, 00:31:16.223 "write_zeroes": true, 00:31:16.223 "zcopy": false, 00:31:16.223 "get_zone_info": false, 00:31:16.223 "zone_management": false, 00:31:16.223 "zone_append": false, 00:31:16.223 "compare": false, 00:31:16.223 "compare_and_write": false, 00:31:16.223 "abort": false, 00:31:16.223 "seek_hole": true, 00:31:16.223 "seek_data": true, 00:31:16.223 "copy": false, 
00:31:16.223 "nvme_iov_md": false 00:31:16.223 }, 00:31:16.224 "driver_specific": { 00:31:16.224 "lvol": { 00:31:16.224 "lvol_store_uuid": "a090e29a-1ca2-4291-80b2-c48fe472b805", 00:31:16.224 "base_bdev": "aio_bdev", 00:31:16.224 "thin_provision": false, 00:31:16.224 "num_allocated_clusters": 38, 00:31:16.224 "snapshot": false, 00:31:16.224 "clone": false, 00:31:16.224 "esnap_clone": false 00:31:16.224 } 00:31:16.224 } 00:31:16.224 } 00:31:16.224 ] 00:31:16.224 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:16.224 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:16.224 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:16.482 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:16.482 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:16.482 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:16.740 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:16.740 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3d57e1e4-f219-4f1c-af7d-156330afa49a 00:31:16.998 07:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a090e29a-1ca2-4291-80b2-c48fe472b805 00:31:16.998 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:17.256 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:17.256 00:31:17.256 real 0m17.105s 00:31:17.256 user 0m34.537s 00:31:17.256 sys 0m3.724s 00:31:17.256 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.256 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:17.256 ************************************ 00:31:17.256 END TEST lvs_grow_dirty 00:31:17.256 ************************************ 00:31:17.256 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:17.256 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:17.256 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:17.256 
07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:17.256 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:17.256 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:17.257 nvmf_trace.0 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.257 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.515 rmmod nvme_tcp 00:31:17.515 rmmod nvme_fabrics 00:31:17.515 rmmod nvme_keyring 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 934955 ']' 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 934955 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 934955 ']' 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 934955 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 934955 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
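
The process_shm step above archives the SPDK trace file left in shared memory so it can be inspected offline. A minimal equivalent of what the trace shows, with $out standing in for the job's output directory (an assumption; the real path is the spdk/../output directory in the tar command):

  for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do   # e.g. nvmf_trace.0
      tar -C /dev/shm/ -cvzf "$out/${f}_shm.tar.gz" "$f"
  done
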
00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 934955' 00:31:17.515 killing process with pid 934955 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 934955 00:31:17.515 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 934955 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.773 07:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.679 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.679 00:31:19.679 real 0m41.622s 00:31:19.679 user 0m52.212s 00:31:19.679 sys 0m9.785s 00:31:19.679 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.679 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:19.679 ************************************ 00:31:19.679 END TEST nvmf_lvs_grow 00:31:19.679 ************************************ 00:31:19.679 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:19.679 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:19.679 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.679 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:19.679 ************************************ 00:31:19.679 START TEST nvmf_bdev_io_wait 00:31:19.679 ************************************ 00:31:19.679 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
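
nvmftestfini then unwinds everything the test set up: the target process, the nvme-tcp/nvme-fabrics modules, the SPDK-tagged iptables rules, and the target network namespace. A hedged sketch using this run's PID and interface names (the explicit netns delete is an assumption about what _remove_spdk_ns amounts to):

  nvmfpid=934955                       # nvmf_tgt PID from this run
  sync
  kill "$nvmfpid" && wait "$nvmfpid"   # works because the harness started it as a child

  modprobe -v -r nvme-tcp              # rmmod output above shows fabrics/keyring going too
  modprobe -v -r nvme-fabrics

  # Keep every iptables rule except the SPDK_NVMF-tagged ones added for the test.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  ip netns delete cvl_0_0_ns_spdk 2>/dev/null
  ip -4 addr flush cvl_0_1
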
--interrupt-mode 00:31:19.939 * Looking for test storage... 00:31:19.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.939 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:19.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.940 --rc genhtml_branch_coverage=1 00:31:19.940 --rc genhtml_function_coverage=1 00:31:19.940 --rc genhtml_legend=1 00:31:19.940 --rc geninfo_all_blocks=1 00:31:19.940 --rc geninfo_unexecuted_blocks=1 00:31:19.940 00:31:19.940 ' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:19.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.940 --rc genhtml_branch_coverage=1 00:31:19.940 --rc genhtml_function_coverage=1 00:31:19.940 --rc genhtml_legend=1 00:31:19.940 --rc geninfo_all_blocks=1 00:31:19.940 --rc geninfo_unexecuted_blocks=1 00:31:19.940 00:31:19.940 ' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:19.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.940 --rc genhtml_branch_coverage=1 00:31:19.940 --rc genhtml_function_coverage=1 00:31:19.940 --rc genhtml_legend=1 00:31:19.940 --rc geninfo_all_blocks=1 00:31:19.940 --rc geninfo_unexecuted_blocks=1 00:31:19.940 00:31:19.940 ' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:19.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.940 --rc genhtml_branch_coverage=1 00:31:19.940 --rc genhtml_function_coverage=1 00:31:19.940 --rc genhtml_legend=1 00:31:19.940 --rc geninfo_all_blocks=1 00:31:19.940 --rc 
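
The cmp_versions walk above is a field-by-field numeric compare used to decide whether the installed lcov predates 2.x before picking coverage flags. A compact re-implementation of that idea (a sketch, not the scripts/common.sh original):

  lt() {    # lt 1.15 2  ->  exit 0 when $1 < $2
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local i
      for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1
  }

  if lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi
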
geninfo_unexecuted_blocks=1 00:31:19.940 00:31:19.940 ' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:19.940 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.941 07:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.207 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:25.207 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:25.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:25.208 Found net devices under 0000:86:00.0: cvl_0_0 00:31:25.208 
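
On this host the device scan above finds two Intel E810 functions (vendor 0x8086, device 0x159b) at 0000:86:00.0 and 0000:86:00.1 and maps each one to its kernel netdev. A stripped-down sysfs-based equivalent of that lookup (my own simplification, not the common.sh cache):

  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done
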
07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.208 07:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:25.208 Found net devices under 0000:86:00.1: cvl_0_1 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:25.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:31:25.208 00:31:25.208 --- 10.0.0.2 ping statistics --- 00:31:25.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.208 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:31:25.208 00:31:25.208 --- 10.0.0.1 ping statistics --- 00:31:25.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.208 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.208 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=938926 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 938926 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 938926 ']' 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
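
nvmf_tcp_init, traced above, builds a small two-endpoint topology on the physical NICs: the target-side port (cvl_0_0, 10.0.0.2) is moved into its own namespace, the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, TCP port 4420 is opened, and both directions are pinged. The same setup, condensed to the commands the trace shows:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
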
00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.209 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.467 [2024-11-26 07:40:53.310545] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:25.467 [2024-11-26 07:40:53.311500] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:31:25.467 [2024-11-26 07:40:53.311536] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:25.467 [2024-11-26 07:40:53.376378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:25.467 [2024-11-26 07:40:53.420423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.467 [2024-11-26 07:40:53.420464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.467 [2024-11-26 07:40:53.420471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.468 [2024-11-26 07:40:53.420477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.468 [2024-11-26 07:40:53.420482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:25.468 [2024-11-26 07:40:53.422048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.468 [2024-11-26 07:40:53.422146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.468 [2024-11-26 07:40:53.422234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:25.468 [2024-11-26 07:40:53.422236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.468 [2024-11-26 07:40:53.422540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.468 [2024-11-26 07:40:53.556103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:25.468 [2024-11-26 07:40:53.556204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:25.468 [2024-11-26 07:40:53.556816] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:25.468 [2024-11-26 07:40:53.557283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
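
nvmfappstart then launches nvmf_tgt inside that namespace in interrupt mode with --wait-for-rpc, waits for the RPC socket, applies the bdev options the test wants before subsystem init, and finally calls framework_start_init (which is where the poll-group threads above switch to interrupt mode). A sketch with paths relative to the spdk checkout; the -p/-c meanings are as passed in the trace:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # (waitforlisten polls /var/tmp/spdk.sock before the first rpc.py call)

  ./scripts/rpc.py bdev_set_options -p 5 -c 1    # small bdev I/O pool/cache for this test
  ./scripts/rpc.py framework_start_init          # poll groups switch to intr mode here
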
00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.468 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.727 [2024-11-26 07:40:53.566936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.727 Malloc0 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.727 [2024-11-26 07:40:53.622880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=939023 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=939025 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.727 { 00:31:25.727 "params": { 00:31:25.727 "name": "Nvme$subsystem", 00:31:25.727 "trtype": "$TEST_TRANSPORT", 00:31:25.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.727 "adrfam": "ipv4", 00:31:25.727 "trsvcid": "$NVMF_PORT", 00:31:25.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.727 "hdgst": ${hdgst:-false}, 00:31:25.727 "ddgst": ${ddgst:-false} 00:31:25.727 }, 00:31:25.727 "method": "bdev_nvme_attach_controller" 00:31:25.727 } 00:31:25.727 EOF 00:31:25.727 )") 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=939027 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:25.727 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.727 { 00:31:25.727 "params": { 00:31:25.727 "name": "Nvme$subsystem", 00:31:25.727 "trtype": "$TEST_TRANSPORT", 00:31:25.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.727 "adrfam": "ipv4", 00:31:25.727 "trsvcid": "$NVMF_PORT", 00:31:25.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.727 "hdgst": ${hdgst:-false}, 00:31:25.727 "ddgst": ${ddgst:-false} 00:31:25.727 }, 00:31:25.727 "method": "bdev_nvme_attach_controller" 00:31:25.727 } 00:31:25.727 EOF 00:31:25.727 )") 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=939030 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.728 { 00:31:25.728 "params": { 00:31:25.728 "name": "Nvme$subsystem", 00:31:25.728 "trtype": "$TEST_TRANSPORT", 00:31:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.728 "adrfam": "ipv4", 00:31:25.728 "trsvcid": "$NVMF_PORT", 00:31:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.728 "hdgst": ${hdgst:-false}, 00:31:25.728 "ddgst": ${ddgst:-false} 00:31:25.728 }, 00:31:25.728 "method": "bdev_nvme_attach_controller" 00:31:25.728 } 00:31:25.728 EOF 00:31:25.728 )") 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.728 { 00:31:25.728 "params": { 00:31:25.728 "name": "Nvme$subsystem", 00:31:25.728 "trtype": "$TEST_TRANSPORT", 00:31:25.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.728 "adrfam": "ipv4", 00:31:25.728 "trsvcid": "$NVMF_PORT", 00:31:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.728 "hdgst": ${hdgst:-false}, 00:31:25.728 "ddgst": ${ddgst:-false} 00:31:25.728 }, 00:31:25.728 "method": "bdev_nvme_attach_controller" 00:31:25.728 } 00:31:25.728 EOF 00:31:25.728 )") 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 939023 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:25.728 "params": { 00:31:25.728 "name": "Nvme1", 00:31:25.728 "trtype": "tcp", 00:31:25.728 "traddr": "10.0.0.2", 00:31:25.728 "adrfam": "ipv4", 00:31:25.728 "trsvcid": "4420", 00:31:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.728 "hdgst": false, 00:31:25.728 "ddgst": false 00:31:25.728 }, 00:31:25.728 "method": "bdev_nvme_attach_controller" 00:31:25.728 }' 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:25.728 "params": { 00:31:25.728 "name": "Nvme1", 00:31:25.728 "trtype": "tcp", 00:31:25.728 "traddr": "10.0.0.2", 00:31:25.728 "adrfam": "ipv4", 00:31:25.728 "trsvcid": "4420", 00:31:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.728 "hdgst": false, 00:31:25.728 "ddgst": false 00:31:25.728 }, 00:31:25.728 "method": "bdev_nvme_attach_controller" 00:31:25.728 }' 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:25.728 "params": { 00:31:25.728 "name": "Nvme1", 00:31:25.728 "trtype": "tcp", 00:31:25.728 "traddr": "10.0.0.2", 00:31:25.728 "adrfam": "ipv4", 00:31:25.728 "trsvcid": "4420", 00:31:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.728 "hdgst": false, 00:31:25.728 "ddgst": false 00:31:25.728 }, 00:31:25.728 "method": "bdev_nvme_attach_controller" 00:31:25.728 }' 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:25.728 07:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:25.728 "params": { 00:31:25.728 "name": "Nvme1", 00:31:25.728 "trtype": "tcp", 00:31:25.728 "traddr": "10.0.0.2", 00:31:25.728 "adrfam": "ipv4", 00:31:25.728 "trsvcid": "4420", 00:31:25.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.728 "hdgst": false, 00:31:25.728 "ddgst": false 00:31:25.728 }, 00:31:25.728 "method": "bdev_nvme_attach_controller" 00:31:25.728 }' 00:31:25.728 [2024-11-26 07:40:53.676073] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:31:25.728 [2024-11-26 07:40:53.676127] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:25.728 [2024-11-26 07:40:53.676546] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:31:25.728 [2024-11-26 07:40:53.676547] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:31:25.728 [2024-11-26 07:40:53.676590] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:25.728 [2024-11-26 07:40:53.676592] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:25.728 [2024-11-26 07:40:53.678463] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:31:25.728 [2024-11-26 07:40:53.678511] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:25.987 [2024-11-26 07:40:53.870881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.987 [2024-11-26 07:40:53.915161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:25.987 [2024-11-26 07:40:53.963728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.987 [2024-11-26 07:40:54.018211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:25.987 [2024-11-26 07:40:54.023264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.987 [2024-11-26 07:40:54.066120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:26.245 [2024-11-26 07:40:54.083082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.245 [2024-11-26 07:40:54.125784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:26.245 Running I/O for 1 seconds... 00:31:26.245 Running I/O for 1 seconds... 00:31:26.245 Running I/O for 1 seconds... 00:31:26.245 Running I/O for 1 seconds... 
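To summarize the bdev_io_wait flow traced above (a condensed sketch rather than the script's literal code; only flags and paths visible in the trace are used, and the shell variable names here are illustrative): four bdevperf instances are launched in parallel on separate cores, one per workload, each fed the same resolved attach-controller JSON (Nvme1 -> nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420) through process substitution, and the script then waits for all of them.

    # Sketch assuming gen_nvmf_target_json (from nvmf/common.sh) emits the JSON printed above.
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID

The per-instance shared-memory ids (-i 1..4) and the 256 MB memory size (-s 256, showing up as -m 256 in the EAL parameter lines above) match the trace; the four "Running I/O for 1 seconds..." lines and the four per-workload latency tables that follow are those instances reporting back.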
00:31:27.183 238080.00 IOPS, 930.00 MiB/s 00:31:27.183 Latency(us) 00:31:27.183 [2024-11-26T06:40:55.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.183 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:27.183 Nvme1n1 : 1.00 237709.57 928.55 0.00 0.00 535.20 229.73 1538.67 00:31:27.183 [2024-11-26T06:40:55.283Z] =================================================================================================================== 00:31:27.183 [2024-11-26T06:40:55.283Z] Total : 237709.57 928.55 0.00 0.00 535.20 229.73 1538.67 00:31:27.183 7804.00 IOPS, 30.48 MiB/s 00:31:27.183 Latency(us) 00:31:27.183 [2024-11-26T06:40:55.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.183 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:27.183 Nvme1n1 : 1.02 7818.07 30.54 0.00 0.00 16244.20 1488.81 23365.01 00:31:27.183 [2024-11-26T06:40:55.283Z] =================================================================================================================== 00:31:27.183 [2024-11-26T06:40:55.283Z] Total : 7818.07 30.54 0.00 0.00 16244.20 1488.81 23365.01 00:31:27.183 13446.00 IOPS, 52.52 MiB/s 00:31:27.183 Latency(us) 00:31:27.183 [2024-11-26T06:40:55.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.183 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:27.183 Nvme1n1 : 1.01 13508.44 52.77 0.00 0.00 9447.34 2094.30 14075.99 00:31:27.183 [2024-11-26T06:40:55.283Z] =================================================================================================================== 00:31:27.183 [2024-11-26T06:40:55.283Z] Total : 13508.44 52.77 0.00 0.00 9447.34 2094.30 14075.99 00:31:27.442 7707.00 IOPS, 30.11 MiB/s 00:31:27.442 Latency(us) 00:31:27.442 [2024-11-26T06:40:55.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.442 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:27.442 Nvme1n1 : 1.00 7804.95 30.49 0.00 0.00 16366.30 3148.58 33052.94 00:31:27.442 [2024-11-26T06:40:55.542Z] =================================================================================================================== 00:31:27.442 [2024-11-26T06:40:55.542Z] Total : 7804.95 30.49 0.00 0.00 16366.30 3148.58 33052.94 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 939025 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 939027 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 939030 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:27.442 rmmod nvme_tcp 00:31:27.442 rmmod nvme_fabrics 00:31:27.442 rmmod nvme_keyring 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 938926 ']' 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 938926 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 938926 ']' 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 938926 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.442 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938926 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938926' 00:31:27.702 killing process with pid 938926 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 938926 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 938926 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:27.702 
07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.702 07:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:30.237 00:31:30.237 real 0m10.020s 00:31:30.237 user 0m14.509s 00:31:30.237 sys 0m5.859s 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:30.237 ************************************ 00:31:30.237 END TEST nvmf_bdev_io_wait 00:31:30.237 ************************************ 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:30.237 ************************************ 00:31:30.237 START TEST nvmf_queue_depth 00:31:30.237 ************************************ 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:30.237 * Looking for test storage... 
00:31:30.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:30.237 07:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:30.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.237 --rc genhtml_branch_coverage=1 00:31:30.237 --rc genhtml_function_coverage=1 00:31:30.237 --rc genhtml_legend=1 00:31:30.237 --rc geninfo_all_blocks=1 00:31:30.237 --rc geninfo_unexecuted_blocks=1 00:31:30.237 00:31:30.237 ' 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:30.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.237 --rc genhtml_branch_coverage=1 00:31:30.237 --rc genhtml_function_coverage=1 00:31:30.237 --rc genhtml_legend=1 00:31:30.237 --rc geninfo_all_blocks=1 00:31:30.237 --rc geninfo_unexecuted_blocks=1 00:31:30.237 00:31:30.237 ' 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:30.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.237 --rc genhtml_branch_coverage=1 00:31:30.237 --rc genhtml_function_coverage=1 00:31:30.237 --rc genhtml_legend=1 00:31:30.237 --rc geninfo_all_blocks=1 00:31:30.237 --rc geninfo_unexecuted_blocks=1 00:31:30.237 00:31:30.237 ' 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:30.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.237 --rc genhtml_branch_coverage=1 00:31:30.237 --rc genhtml_function_coverage=1 00:31:30.237 --rc genhtml_legend=1 00:31:30.237 --rc geninfo_all_blocks=1 00:31:30.237 --rc 
geninfo_unexecuted_blocks=1 00:31:30.237 00:31:30.237 ' 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.237 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:30.238 07:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.512 07:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:35.512 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:35.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.512 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:31:35.513 Found net devices under 0000:86:00.0: cvl_0_0 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:35.513 Found net devices under 0000:86:00.1: cvl_0_1 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:31:35.513 00:31:35.513 --- 10.0.0.2 ping statistics --- 00:31:35.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.513 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:31:35.513 00:31:35.513 --- 10.0.0.1 ping statistics --- 00:31:35.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.513 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.513 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=942800 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 942800 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 942800 ']' 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
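Gathering the nvmf_tcp_init sequence traced just above into one place (a sketch using only the commands visible in the trace; cvl_0_0 and cvl_0_1 are the two e810 ports discovered earlier): the target-side port is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened on the initiator interface, and connectivity is verified in both directions before the target application starts.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator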
00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.772 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.772 [2024-11-26 07:41:03.689261] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:35.772 [2024-11-26 07:41:03.690189] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:31:35.772 [2024-11-26 07:41:03.690221] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.772 [2024-11-26 07:41:03.760692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.772 [2024-11-26 07:41:03.802918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.772 [2024-11-26 07:41:03.802958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.772 [2024-11-26 07:41:03.802965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.772 [2024-11-26 07:41:03.802972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.772 [2024-11-26 07:41:03.802977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.772 [2024-11-26 07:41:03.803530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.031 [2024-11-26 07:41:03.870767] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.031 [2024-11-26 07:41:03.870996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.031 [2024-11-26 07:41:03.936202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.031 Malloc0 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.031 [2024-11-26 07:41:03.992090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=942823 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 942823 /var/tmp/bdevperf.sock 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 942823 ']' 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.031 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:36.032 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:36.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:36.032 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.032 07:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.032 [2024-11-26 07:41:04.043054] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
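Restating the queue_depth.sh steps above, together with the controller attach and perform_tests calls that appear in the lines immediately below (a sketch, not the script itself; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py and is spelled out here, while SPDK_ROOT is shorthand for the workspace checkout):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Target side: nvmf_tgt inside the namespace on one core (-m 0x2), interrupt mode.
    ip netns exec cvl_0_0_ns_spdk $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    $SPDK_ROOT/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK_ROOT/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK_ROOT/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: bdevperf waits (-z) on its own RPC socket, the NVMe-oF controller is
    # attached over that socket, then perform_tests starts the 10 s verify run at qd=1024.
    $SPDK_ROOT/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 64 and 512 arguments come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set at the top of queue_depth.sh, so the namespace behind nqn.2016-06.io.spdk:cnode1 is a 64 MB malloc bdev with a 512-byte block size.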
00:31:36.032 [2024-11-26 07:41:04.043096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942823 ] 00:31:36.032 [2024-11-26 07:41:04.104632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.291 [2024-11-26 07:41:04.146346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.291 07:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.291 07:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:36.291 07:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:36.291 07:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.291 07:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.291 NVMe0n1 00:31:36.291 07:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.291 07:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:36.550 Running I/O for 10 seconds... 00:31:38.425 11923.00 IOPS, 46.57 MiB/s [2024-11-26T06:41:07.462Z] 12048.50 IOPS, 47.06 MiB/s [2024-11-26T06:41:08.839Z] 12116.00 IOPS, 47.33 MiB/s [2024-11-26T06:41:09.776Z] 12165.00 IOPS, 47.52 MiB/s [2024-11-26T06:41:10.712Z] 12218.60 IOPS, 47.73 MiB/s [2024-11-26T06:41:11.649Z] 12208.00 IOPS, 47.69 MiB/s [2024-11-26T06:41:12.584Z] 12247.29 IOPS, 47.84 MiB/s [2024-11-26T06:41:13.520Z] 12266.62 IOPS, 47.92 MiB/s [2024-11-26T06:41:14.458Z] 12244.56 IOPS, 47.83 MiB/s [2024-11-26T06:41:14.717Z] 12256.70 IOPS, 47.88 MiB/s 00:31:46.617 Latency(us) 00:31:46.617 [2024-11-26T06:41:14.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.617 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:46.617 Verification LBA range: start 0x0 length 0x4000 00:31:46.617 NVMe0n1 : 10.06 12269.66 47.93 0.00 0.00 83152.16 19147.91 52200.85 00:31:46.617 [2024-11-26T06:41:14.717Z] =================================================================================================================== 00:31:46.617 [2024-11-26T06:41:14.717Z] Total : 12269.66 47.93 0.00 0.00 83152.16 19147.91 52200.85 00:31:46.617 { 00:31:46.617 "results": [ 00:31:46.617 { 00:31:46.617 "job": "NVMe0n1", 00:31:46.617 "core_mask": "0x1", 00:31:46.617 "workload": "verify", 00:31:46.617 "status": "finished", 00:31:46.617 "verify_range": { 00:31:46.617 "start": 0, 00:31:46.617 "length": 16384 00:31:46.617 }, 00:31:46.617 "queue_depth": 1024, 00:31:46.617 "io_size": 4096, 00:31:46.617 "runtime": 10.06246, 00:31:46.617 "iops": 12269.663680650656, 00:31:46.617 "mibps": 47.928373752541624, 00:31:46.617 "io_failed": 0, 00:31:46.617 "io_timeout": 0, 00:31:46.617 "avg_latency_us": 83152.15731769666, 00:31:46.617 "min_latency_us": 19147.909565217393, 00:31:46.617 "max_latency_us": 52200.848695652174 00:31:46.617 } 
00:31:46.617 ], 00:31:46.617 "core_count": 1 00:31:46.617 } 00:31:46.617 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 942823 00:31:46.617 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 942823 ']' 00:31:46.617 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 942823 00:31:46.617 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:46.617 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.618 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942823 00:31:46.618 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.618 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.618 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942823' 00:31:46.618 killing process with pid 942823 00:31:46.618 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 942823 00:31:46.618 Received shutdown signal, test time was about 10.000000 seconds 00:31:46.618 00:31:46.618 Latency(us) 00:31:46.618 [2024-11-26T06:41:14.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.618 [2024-11-26T06:41:14.718Z] =================================================================================================================== 00:31:46.618 [2024-11-26T06:41:14.718Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.618 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 942823 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.877 rmmod nvme_tcp 00:31:46.877 rmmod nvme_fabrics 00:31:46.877 rmmod nvme_keyring 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:46.877 
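[editor's note] Condensed from the trace above, the queue-depth exercise boils down to starting bdevperf idle in RPC mode, attaching the remote namespace over TCP, and triggering the workload from bdevperf.py. The sketch below reuses the exact paths, socket, and listener address that appear in this log; it assumes the nvmf target is already up and exporting nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and it collapses the harness's waitforlisten polling loop into a single rpc_get_methods probe.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z) with queue depth 1024, 4 KiB verify I/O, 10 s runtime
  "$SPDK"/build/examples/bdevperf -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!

  # The harness polls the RPC socket until it answers; a single probe shown here
  "$SPDK"/scripts/rpc.py -s "$SOCK" rpc_get_methods > /dev/null

  # Attach the remote namespace; it shows up as bdev NVMe0n1
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Run the configured workload and print the JSON summary seen above
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests

  kill "$bdevperf_pid"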
07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 942800 ']' 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 942800 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 942800 ']' 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 942800 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942800 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942800' 00:31:46.877 killing process with pid 942800 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 942800 00:31:46.877 07:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 942800 00:31:47.137 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.137 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.137 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.138 07:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.043 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.043 00:31:49.043 real 0m19.235s 00:31:49.043 user 0m22.436s 00:31:49.043 sys 0m6.025s 00:31:49.043 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.043 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:49.043 ************************************ 00:31:49.043 END TEST nvmf_queue_depth 00:31:49.043 ************************************ 00:31:49.043 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:49.043 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.043 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.043 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.303 ************************************ 00:31:49.303 START TEST nvmf_target_multipath 00:31:49.303 ************************************ 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:49.303 * Looking for test storage... 00:31:49.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.303 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:49.304 07:41:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:49.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.304 --rc genhtml_branch_coverage=1 00:31:49.304 --rc genhtml_function_coverage=1 00:31:49.304 --rc genhtml_legend=1 00:31:49.304 --rc geninfo_all_blocks=1 00:31:49.304 --rc geninfo_unexecuted_blocks=1 00:31:49.304 00:31:49.304 ' 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:49.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.304 --rc genhtml_branch_coverage=1 00:31:49.304 --rc genhtml_function_coverage=1 00:31:49.304 --rc genhtml_legend=1 00:31:49.304 --rc geninfo_all_blocks=1 00:31:49.304 --rc geninfo_unexecuted_blocks=1 00:31:49.304 00:31:49.304 ' 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:49.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.304 --rc genhtml_branch_coverage=1 00:31:49.304 --rc genhtml_function_coverage=1 00:31:49.304 --rc genhtml_legend=1 00:31:49.304 --rc geninfo_all_blocks=1 00:31:49.304 --rc 
geninfo_unexecuted_blocks=1 00:31:49.304 00:31:49.304 ' 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:49.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.304 --rc genhtml_branch_coverage=1 00:31:49.304 --rc genhtml_function_coverage=1 00:31:49.304 --rc genhtml_legend=1 00:31:49.304 --rc geninfo_all_blocks=1 00:31:49.304 --rc geninfo_unexecuted_blocks=1 00:31:49.304 00:31:49.304 ' 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
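[editor's note] The cmp_versions trace above is just a dot-separated numeric comparison of the `lcov --version` output against 2, used to decide which LCOV_OPTS spelling to export. A rough standalone equivalent, not the harness's own helper and with the field handling simplified:

  version_lt() {
      # Return 0 (true) when $1 sorts before $2, comparing numeric dot-separated fields
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }

  lcov_ver=$(lcov --version | awk '{print $NF}')
  if version_lt "$lcov_ver" 2; then
      # lcov 1.x option spelling, as selected in the run above
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi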
00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.304 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.305 07:41:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.305 07:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
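[editor's note] The PCI scan that starts below matches each supported device ID (E810, 0x159b, in this run) and then resolves the PCI function to its kernel netdev by globbing the sysfs net/ directory, the same `"/sys/bus/pci/devices/$pci/net/"*` expansion visible in the trace. A minimal standalone version for the first port it will report (0000:86:00.0), for illustration only:

  pci=0000:86:00.0
  # Every netdev bound to this PCI function appears as a subdirectory of net/
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$dev" ] || continue          # glob may not match if the port is unbound
      echo "Found net device under $pci: ${dev##*/}"   # prints cvl_0_0 in this run
  done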
00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.983 07:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:55.983 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.983 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:55.984 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.984 07:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:55.984 Found net devices under 0000:86:00.0: cvl_0_0 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:55.984 Found net devices under 0000:86:00.1: cvl_0_1 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:55.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:31:55.984 00:31:55.984 --- 10.0.0.2 ping statistics --- 00:31:55.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.984 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:31:55.984 00:31:55.984 --- 10.0.0.1 ping statistics --- 00:31:55.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.984 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.984 07:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:55.984 only one NIC for nvmf test 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.984 rmmod nvme_tcp 00:31:55.984 rmmod nvme_fabrics 00:31:55.984 rmmod nvme_keyring 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:55.984 07:41:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:55.984 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:55.985 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:55.985 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:55.985 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:55.985 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.985 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.985 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.985 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.985 07:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:57.443 07:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.443 00:31:57.443 real 0m8.020s 00:31:57.443 user 0m1.733s 00:31:57.443 sys 0m4.325s 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:57.443 ************************************ 00:31:57.443 END TEST nvmf_target_multipath 00:31:57.443 ************************************ 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:57.443 ************************************ 00:31:57.443 START TEST nvmf_zcopy 00:31:57.443 ************************************ 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:57.443 * Looking for test storage... 
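[editor's note] For reference, the test-network bring-up that nvmftestinit traced in the multipath run above reduces to moving one E810 port into a private namespace, addressing both ends, opening the NVMe/TCP port, and ping-checking the path. The sketch keeps the interface names, addresses, and iptables rule exactly as they appear in that trace; the preliminary `ip -4 addr flush` steps are omitted.

  # Target side lives in its own namespace; initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Let NVMe/TCP (port 4420) in on the initiator-facing interface, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1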
00:31:57.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:57.443 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:57.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.444 --rc genhtml_branch_coverage=1 00:31:57.444 --rc genhtml_function_coverage=1 00:31:57.444 --rc genhtml_legend=1 00:31:57.444 --rc geninfo_all_blocks=1 00:31:57.444 --rc geninfo_unexecuted_blocks=1 00:31:57.444 00:31:57.444 ' 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:57.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.444 --rc genhtml_branch_coverage=1 00:31:57.444 --rc genhtml_function_coverage=1 00:31:57.444 --rc genhtml_legend=1 00:31:57.444 --rc geninfo_all_blocks=1 00:31:57.444 --rc geninfo_unexecuted_blocks=1 00:31:57.444 00:31:57.444 ' 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:57.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.444 --rc genhtml_branch_coverage=1 00:31:57.444 --rc genhtml_function_coverage=1 00:31:57.444 --rc genhtml_legend=1 00:31:57.444 --rc geninfo_all_blocks=1 00:31:57.444 --rc geninfo_unexecuted_blocks=1 00:31:57.444 00:31:57.444 ' 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:57.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:57.444 --rc genhtml_branch_coverage=1 00:31:57.444 --rc genhtml_function_coverage=1 00:31:57.444 --rc genhtml_legend=1 00:31:57.444 --rc geninfo_all_blocks=1 00:31:57.444 --rc geninfo_unexecuted_blocks=1 00:31:57.444 00:31:57.444 ' 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.444 07:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:57.444 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:57.445 07:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:02.719 07:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:02.719 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:02.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:02.719 Found net devices under 0000:86:00.0: cvl_0_0 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.719 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:02.719 Found net devices under 0000:86:00.1: cvl_0_1 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:02.720 07:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:02.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:02.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:32:02.720 00:32:02.720 --- 10.0.0.2 ping statistics --- 00:32:02.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.720 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:02.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:32:02.720 00:32:02.720 --- 10.0.0.1 ping statistics --- 00:32:02.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.720 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=951461 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 951461 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 951461 ']' 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.720 07:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:02.980 [2024-11-26 07:41:30.826627] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:02.980 [2024-11-26 07:41:30.827581] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:32:02.980 [2024-11-26 07:41:30.827619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.980 [2024-11-26 07:41:30.894521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.980 [2024-11-26 07:41:30.935591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.980 [2024-11-26 07:41:30.935626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:02.980 [2024-11-26 07:41:30.935633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.980 [2024-11-26 07:41:30.935639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.980 [2024-11-26 07:41:30.935644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.980 [2024-11-26 07:41:30.936194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.980 [2024-11-26 07:41:31.003022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:02.980 [2024-11-26 07:41:31.003238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
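The nvmf_tcp_init and nvmfappstart steps traced above reduce to a short stand-alone recipe: move one of the two cvl_0_* ports into a private namespace, address both ends, open TCP port 4420, verify reachability, then start nvmf_tgt in interrupt mode inside that namespace. The sketch below is reconstructed from this trace; the rpc_get_methods polling loop stands in for the suite's waitforlisten helper and is an assumption rather than the script's own code.

# Sketch of the TCP test-bed built above (assumes the cvl_0_0/cvl_0_1 ports exist and SPDK is built)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator side reaches the target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the target namespace reaches back

# Start the target inside the namespace in interrupt mode, as nvmfappstart -m 0x2 does here
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
# Assumed wait loop: poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.1; done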
00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.980 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:02.980 [2024-11-26 07:41:31.060780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.981 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.981 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:02.981 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.981 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:03.240 [2024-11-26 07:41:31.085037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:03.240 07:41:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:03.240 malloc0 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:03.240 { 00:32:03.240 "params": { 00:32:03.240 "name": "Nvme$subsystem", 00:32:03.240 "trtype": "$TEST_TRANSPORT", 00:32:03.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.240 "adrfam": "ipv4", 00:32:03.240 "trsvcid": "$NVMF_PORT", 00:32:03.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.240 "hdgst": ${hdgst:-false}, 00:32:03.240 "ddgst": ${ddgst:-false} 00:32:03.240 }, 00:32:03.240 "method": "bdev_nvme_attach_controller" 00:32:03.240 } 00:32:03.240 EOF 00:32:03.240 )") 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:03.240 07:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:03.240 "params": { 00:32:03.240 "name": "Nvme1", 00:32:03.240 "trtype": "tcp", 00:32:03.240 "traddr": "10.0.0.2", 00:32:03.240 "adrfam": "ipv4", 00:32:03.240 "trsvcid": "4420", 00:32:03.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.240 "hdgst": false, 00:32:03.240 "ddgst": false 00:32:03.240 }, 00:32:03.240 "method": "bdev_nvme_attach_controller" 00:32:03.240 }' 00:32:03.240 [2024-11-26 07:41:31.178457] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
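Condensed from the xtrace above, the RPC calls that build the zcopy-enabled target and the hand-off of the generated JSON to bdevperf amount to roughly the following sketch. Parameter values are the ones shown in this run; the explicit rpc.py invocation and the subsystems/bdev wrapper around the bdev_nvme_attach_controller entry are assumptions, since the script itself goes through rpc_cmd and gen_nvmf_target_json with a file-descriptor redirect.

# Target-side configuration, as traced above
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: feed the attach-controller config to bdevperf (10 s verify run, QD 128, 8 KiB I/O)
./build/examples/bdevperf -t 10 -q 128 -w verify -o 8192 --json <(cat <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
JSON
)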
00:32:03.240 [2024-11-26 07:41:31.178506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951484 ] 00:32:03.240 [2024-11-26 07:41:31.241755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.240 [2024-11-26 07:41:31.282914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.809 Running I/O for 10 seconds... 00:32:05.683 8237.00 IOPS, 64.35 MiB/s [2024-11-26T06:41:34.720Z] 8331.00 IOPS, 65.09 MiB/s [2024-11-26T06:41:36.098Z] 8360.00 IOPS, 65.31 MiB/s [2024-11-26T06:41:36.663Z] 8375.25 IOPS, 65.43 MiB/s [2024-11-26T06:41:38.036Z] 8380.00 IOPS, 65.47 MiB/s [2024-11-26T06:41:38.969Z] 8388.83 IOPS, 65.54 MiB/s [2024-11-26T06:41:39.903Z] 8393.00 IOPS, 65.57 MiB/s [2024-11-26T06:41:40.840Z] 8402.62 IOPS, 65.65 MiB/s [2024-11-26T06:41:41.778Z] 8399.22 IOPS, 65.62 MiB/s [2024-11-26T06:41:41.778Z] 8393.90 IOPS, 65.58 MiB/s 00:32:13.678 Latency(us) 00:32:13.678 [2024-11-26T06:41:41.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.678 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:13.678 Verification LBA range: start 0x0 length 0x1000 00:32:13.678 Nvme1n1 : 10.05 8362.93 65.34 0.00 0.00 15207.32 3034.60 44222.55 00:32:13.678 [2024-11-26T06:41:41.778Z] =================================================================================================================== 00:32:13.678 [2024-11-26T06:41:41.778Z] Total : 8362.93 65.34 0.00 0.00 15207.32 3034.60 44222.55 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=953144 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:13.937 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:13.937 { 00:32:13.937 "params": { 00:32:13.937 "name": "Nvme$subsystem", 00:32:13.937 "trtype": "$TEST_TRANSPORT", 00:32:13.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.938 "adrfam": "ipv4", 00:32:13.938 "trsvcid": "$NVMF_PORT", 00:32:13.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.938 "hdgst": ${hdgst:-false}, 00:32:13.938 "ddgst": ${ddgst:-false} 00:32:13.938 }, 00:32:13.938 "method": "bdev_nvme_attach_controller" 00:32:13.938 } 00:32:13.938 EOF 00:32:13.938 )") 00:32:13.938 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:13.938 
[2024-11-26 07:41:41.880533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.880566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:13.938 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:13.938 07:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.938 "params": { 00:32:13.938 "name": "Nvme1", 00:32:13.938 "trtype": "tcp", 00:32:13.938 "traddr": "10.0.0.2", 00:32:13.938 "adrfam": "ipv4", 00:32:13.938 "trsvcid": "4420", 00:32:13.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.938 "hdgst": false, 00:32:13.938 "ddgst": false 00:32:13.938 }, 00:32:13.938 "method": "bdev_nvme_attach_controller" 00:32:13.938 }' 00:32:13.938 [2024-11-26 07:41:41.892502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.892521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:41.904498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.904511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:41.916499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.916509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:41.923246] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:32:13.938 [2024-11-26 07:41:41.923290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953144 ] 00:32:13.938 [2024-11-26 07:41:41.928502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.928515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:41.940494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.940504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:41.952498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.952509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:41.964501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.964517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:41.976496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.976506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:41.985093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.938 [2024-11-26 07:41:41.988500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:41.988510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:42.000498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:42.000512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:42.012496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:42.012507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:42.024501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.938 [2024-11-26 07:41:42.024524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.938 [2024-11-26 07:41:42.027317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.197 [2024-11-26 07:41:42.036504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.036518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.048508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.048528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.060501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.060517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.072500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:32:14.197 [2024-11-26 07:41:42.072515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.084501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.084512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.096498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.096510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.108817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.108835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.120505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.120523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.132503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.132518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.144502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.144516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.156499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.156510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.168500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.168511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.180499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.180513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.192500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.192514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.204496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.204506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.216495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.216505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.228498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.228508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.240499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.240514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 
07:41:42.252496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.252505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.264496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.264506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.276496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.276508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.197 [2024-11-26 07:41:42.288500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.197 [2024-11-26 07:41:42.288512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.300495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.300504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.312495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.312508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.324496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.324509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.336506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.336524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 Running I/O for 5 seconds... 
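The second bdevperf instance now runs a 5-second randrw workload while the script keeps exercising the subsystem over RPC; each nvmf_subsystem_add_ns call for an NSID that is still attached fails, producing the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs below. A hedged illustration of such a hot-add loop follows; the iteration count and the bare add-only form are assumptions for illustration, not the actual body of zcopy.sh.

# Illustrative only: re-adding an NSID that is still attached reproduces the error pair seen above
for _ in $(seq 1 50); do
    # Expected to fail with "Requested NSID 1 already in use" while I/O keeps running
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done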
00:32:14.459 [2024-11-26 07:41:42.354670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.354691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.369842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.369861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.384764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.384784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.395545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.395565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.410813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.410833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.425817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.425837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.440560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.440583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.453333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.453354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.466228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.466249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.482074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.482095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.496751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.496770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.509146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.509167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.524210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.524230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.537482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 [2024-11-26 07:41:42.537502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.459 [2024-11-26 07:41:42.549199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.459 
[2024-11-26 07:41:42.549218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.562210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.562231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.577768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.577792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.592757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.592776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.604764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.604782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.618377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.618396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.633936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.633961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.649002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.649021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.661982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.662003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.677485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.677505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.692708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.692728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.704750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.717 [2024-11-26 07:41:42.704769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.717 [2024-11-26 07:41:42.720735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.718 [2024-11-26 07:41:42.720755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.718 [2024-11-26 07:41:42.732736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.718 [2024-11-26 07:41:42.732755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.718 [2024-11-26 07:41:42.746013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.718 [2024-11-26 07:41:42.746034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.718 [2024-11-26 07:41:42.760749] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.718 [2024-11-26 07:41:42.760770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.718 [2024-11-26 07:41:42.771576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.718 [2024-11-26 07:41:42.771596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.718 [2024-11-26 07:41:42.786065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.718 [2024-11-26 07:41:42.786085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.718 [2024-11-26 07:41:42.801462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.718 [2024-11-26 07:41:42.801481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.816569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.816589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.829516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.829536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.844378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.844403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.857208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.857228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.869986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.870006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.880584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.880603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.894259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.894280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.909323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.909344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.924364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.924383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.938073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.938093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.952957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.952976] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.963577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.963596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.978824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.978844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:42.993806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:42.993825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:43.008521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:43.008540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:43.022206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:43.022225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:43.036754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:43.036773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:43.049184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:43.049203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.977 [2024-11-26 07:41:43.064707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.977 [2024-11-26 07:41:43.064726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.078131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.078151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.093234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.093253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.108649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.108668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.121458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.121477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.136613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.136632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.150376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.150402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.165702] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.165722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.180384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.180413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.192607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.192628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.206304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.206323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.221092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.221112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.236257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.236278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.250972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.250992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.266417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.266437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.281280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.281299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.296600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.296620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.309138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.309157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.236 [2024-11-26 07:41:43.322624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.236 [2024-11-26 07:41:43.322644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.338054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.338074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 16357.00 IOPS, 127.79 MiB/s [2024-11-26T06:41:43.596Z] [2024-11-26 07:41:43.352747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.352767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.365131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
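Note on the repeated pair of messages above: the SPDK NVMe-oF target is rejecting a namespace whose NSID is already allocated. spdk_nvmf_subsystem_add_ns_ext() fails with "Requested NSID 1 already in use", and the nvmf_subsystem_add_ns RPC handler then logs "Unable to add namespace", so each RPC attempt produces the two lines together while the I/O workload (the interleaved IOPS/MiB-s ticks) keeps running. A minimal sketch of how the same rejection can be provoked by hand against a running nvmf_tgt follows; the bdev and NQN names are illustrative, the rpc.py options are quoted from memory and may differ between SPDK releases, and this is not claimed to be the exact command sequence this test executes.
# start the target and give the subsystem a namespace at NSID 1 (names below are hypothetical)
./build/bin/nvmf_tgt &
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
# a second add with the same NSID is expected to fail, producing the
# "Requested NSID 1 already in use" / "Unable to add namespace" pair seen in this log
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1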
00:32:15.496 [2024-11-26 07:41:43.365150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.377990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.378009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.393425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.393445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.408975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.408995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.424173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.424193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.435894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.435915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.450186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.450205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.465014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.465033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.480328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.480349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.494630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.494650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.509573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.509592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.521062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.521081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.534134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.534154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.549340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.549360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.564355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.564375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.496 [2024-11-26 07:41:43.575828] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.496 [2024-11-26 07:41:43.575847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.590498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.590519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.605513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.605532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.620418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.620439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.632842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.632861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.646239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.646258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.661415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.661434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.676389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.676408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.690705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.690724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.705963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.705983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.721054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.721075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.733572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.733591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.744786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.744805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.755 [2024-11-26 07:41:43.758593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.755 [2024-11-26 07:41:43.758612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.756 [2024-11-26 07:41:43.773834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.756 [2024-11-26 07:41:43.773853] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.756 [2024-11-26 07:41:43.788696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.756 [2024-11-26 07:41:43.788715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.756 [2024-11-26 07:41:43.800265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.756 [2024-11-26 07:41:43.800285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.756 [2024-11-26 07:41:43.814688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.756 [2024-11-26 07:41:43.814709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.756 [2024-11-26 07:41:43.829605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.756 [2024-11-26 07:41:43.829625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.756 [2024-11-26 07:41:43.844421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.756 [2024-11-26 07:41:43.844442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.014 [2024-11-26 07:41:43.858713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.014 [2024-11-26 07:41:43.858732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.873763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.873783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.888672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.888691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.899432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.899456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.914505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.914525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.929301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.929326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.942066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.942085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.957311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.957330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.972814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.972833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:43.988862] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:43.988881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:44.004793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:44.004811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:44.021231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:44.021252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:44.032953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:44.032972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:44.046382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:44.046401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:44.061672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:44.061691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:44.076699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:44.076719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:44.087487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:44.087506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.015 [2024-11-26 07:41:44.103075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.015 [2024-11-26 07:41:44.103095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.118135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.118155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.133275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.133294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.144599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.144619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.158120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.158139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.173402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.173426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.188544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.188563] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.201559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.201578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.217026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.217046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.232301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.232321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.245309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.245327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.256793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.256811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.270105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.270125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.273 [2024-11-26 07:41:44.285196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.273 [2024-11-26 07:41:44.285216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.274 [2024-11-26 07:41:44.300821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.274 [2024-11-26 07:41:44.300840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.274 [2024-11-26 07:41:44.316922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.274 [2024-11-26 07:41:44.316942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.274 [2024-11-26 07:41:44.333077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.274 [2024-11-26 07:41:44.333097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.274 [2024-11-26 07:41:44.348971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.274 [2024-11-26 07:41:44.348991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.274 16365.00 IOPS, 127.85 MiB/s [2024-11-26T06:41:44.374Z] [2024-11-26 07:41:44.364571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.274 [2024-11-26 07:41:44.364591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.378515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.378534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.394073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.394092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 
07:41:44.408680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.408700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.419983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.420002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.434751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.434771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.449835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.449858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.465096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.465115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.477476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.477495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.492444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.492464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.506526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.506546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.521608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.521628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.537129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.537149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.547839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.547858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.562639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.562659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.578233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.578253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.593167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.593186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.608458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.608477] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.533 [2024-11-26 07:41:44.621209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.533 [2024-11-26 07:41:44.621227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.636972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.636993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.652501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.652521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.665739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.665760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.677280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.677300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.690140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.690161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.705829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.705849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.721078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.721097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.733251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.733271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.791 [2024-11-26 07:41:44.745915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.791 [2024-11-26 07:41:44.745935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.760933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.760967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.776756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.776776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.786895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.786915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.802267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.802288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.817220] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.817239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.833331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.833351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.848789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.848815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.864536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.864556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.792 [2024-11-26 07:41:44.878444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.792 [2024-11-26 07:41:44.878468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:44.893586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:44.893607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:44.904010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:44.904030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:44.918446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:44.918465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:44.933745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:44.933765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:44.948959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:44.948978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:44.964285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:44.964305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:44.977461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:44.977481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:44.993005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:44.993024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.008332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:45.008352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.021311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:45.021331] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.034149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:45.034169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.049332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:45.049352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.063971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:45.063993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.078363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:45.078384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.093576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:45.093596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.108365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.050 [2024-11-26 07:41:45.108385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.050 [2024-11-26 07:41:45.122451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.051 [2024-11-26 07:41:45.122471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.051 [2024-11-26 07:41:45.137306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.051 [2024-11-26 07:41:45.137326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.309 [2024-11-26 07:41:45.152418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.309 [2024-11-26 07:41:45.152439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.309 [2024-11-26 07:41:45.165796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.309 [2024-11-26 07:41:45.165815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.309 [2024-11-26 07:41:45.180582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.309 [2024-11-26 07:41:45.180602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.309 [2024-11-26 07:41:45.194418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.309 [2024-11-26 07:41:45.194438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.309 [2024-11-26 07:41:45.209748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.309 [2024-11-26 07:41:45.209767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.309 [2024-11-26 07:41:45.224794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.309 [2024-11-26 07:41:45.224813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.309 [2024-11-26 07:41:45.236148] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.309 [2024-11-26 07:41:45.236166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.309 [2024-11-26 07:41:45.250523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.250542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.265484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.265504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.280126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.280146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.293893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.293912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.308536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.308555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.320674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.320694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.334766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.334786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.349382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.349401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 16366.00 IOPS, 127.86 MiB/s [2024-11-26T06:41:45.410Z] [2024-11-26 07:41:45.361013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.361033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.373854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.373873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.388829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.388848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.310 [2024-11-26 07:41:45.400130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.310 [2024-11-26 07:41:45.400149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.414136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.414155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.428982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:17.569 [2024-11-26 07:41:45.429000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.444891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.444910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.457391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.457409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.469858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.469877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.485047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.485066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.500496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.500516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.514007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.514031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.524520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.524540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.538184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.538205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.553323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.553343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.568646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.568666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.582051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.582071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.597214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.597234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.612232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.612252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.626185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.626204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.641757] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.641776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.569 [2024-11-26 07:41:45.652157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.569 [2024-11-26 07:41:45.652176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.666845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.666865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.682238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.682259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.696770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.696789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.710596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.710616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.726276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.726296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.741354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.741372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.756068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.756087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.770486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.770505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.786017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.786043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.800609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.800628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.811216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.811234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.826234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.826254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.841347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.841366] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.856275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.856295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.867600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.867618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.828 [2024-11-26 07:41:45.882890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.828 [2024-11-26 07:41:45.882911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.829 [2024-11-26 07:41:45.897997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.829 [2024-11-26 07:41:45.898016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.829 [2024-11-26 07:41:45.912828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.829 [2024-11-26 07:41:45.912846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:45.928713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:45.928733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:45.941742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:45.941761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:45.952314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:45.952333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:45.966546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:45.966566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:45.981390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:45.981409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:45.996489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:45.996510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.007765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.007786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.022335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.022354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.037466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.037485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.052466] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.052489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.064764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.064783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.078838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.078858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.094128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.094149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.108932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.108960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.124454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.124475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.139109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.139130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.154141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.154161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.088 [2024-11-26 07:41:46.168887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.088 [2024-11-26 07:41:46.168906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.184541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.184561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.198622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.198642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.213625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.213645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.228281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.228301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.242872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.242892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.257898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.257918] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.272756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.272776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.285003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.285023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.298498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.298518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.313756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.347 [2024-11-26 07:41:46.313776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.347 [2024-11-26 07:41:46.328650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.328680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.348 [2024-11-26 07:41:46.341388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.341407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.348 [2024-11-26 07:41:46.354097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.354117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.348 16386.25 IOPS, 128.02 MiB/s [2024-11-26T06:41:46.448Z] [2024-11-26 07:41:46.369227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.369247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.348 [2024-11-26 07:41:46.384324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.384345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.348 [2024-11-26 07:41:46.398210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.398229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.348 [2024-11-26 07:41:46.413152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.413171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.348 [2024-11-26 07:41:46.424284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.424303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.348 [2024-11-26 07:41:46.438780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.348 [2024-11-26 07:41:46.438799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.453639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.453658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 
07:41:46.468594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.468614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.480679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.480699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.494346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.494365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.509723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.509744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.525038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.525058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.537382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.537401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.552677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.552697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.565043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.565063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.580728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.580748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.594397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.594416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.609655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.607 [2024-11-26 07:41:46.609674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.607 [2024-11-26 07:41:46.624705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.608 [2024-11-26 07:41:46.624724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.608 [2024-11-26 07:41:46.636210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.608 [2024-11-26 07:41:46.636228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.608 [2024-11-26 07:41:46.650751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.608 [2024-11-26 07:41:46.650770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.608 [2024-11-26 07:41:46.665639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.608 [2024-11-26 07:41:46.665658] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.608 [2024-11-26 07:41:46.680632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.608 [2024-11-26 07:41:46.680651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.608 [2024-11-26 07:41:46.693886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.608 [2024-11-26 07:41:46.693905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.866 [2024-11-26 07:41:46.709229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.866 [2024-11-26 07:41:46.709247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.866 [2024-11-26 07:41:46.720742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.866 [2024-11-26 07:41:46.720763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.866 [2024-11-26 07:41:46.734185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.866 [2024-11-26 07:41:46.734205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.866 [2024-11-26 07:41:46.749234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.866 [2024-11-26 07:41:46.749253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.866 [2024-11-26 07:41:46.764967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.866 [2024-11-26 07:41:46.764986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.777481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.777500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.788803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.788822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.802429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.802448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.817204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.817224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.832761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.832780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.845368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.845387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.856575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.856594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.870711] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.870730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.885974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.885993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.900798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.900817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.917236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.917256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.932283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.932304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.943582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.943600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.867 [2024-11-26 07:41:46.958995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.867 [2024-11-26 07:41:46.959015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.127 [2024-11-26 07:41:46.974041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.127 [2024-11-26 07:41:46.974060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.127 [2024-11-26 07:41:46.989597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.127 [2024-11-26 07:41:46.989616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.127 [2024-11-26 07:41:47.005796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.127 [2024-11-26 07:41:47.005816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.127 [2024-11-26 07:41:47.020507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.127 [2024-11-26 07:41:47.020527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.127 [2024-11-26 07:41:47.031294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.127 [2024-11-26 07:41:47.031313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.127 [2024-11-26 07:41:47.046368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.127 [2024-11-26 07:41:47.046388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.127 [2024-11-26 07:41:47.061174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.127 [2024-11-26 07:41:47.061193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.077055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.077075] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.092352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.092372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.106927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.106951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.122296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.122322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.137834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.137855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.152833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.152852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.168838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.168857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.184186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.184206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.198670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.198689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.128 [2024-11-26 07:41:47.213861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.128 [2024-11-26 07:41:47.213880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.228808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.228827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.242379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.242399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.257643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.257663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.272309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.272329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.283720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.283739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.298153] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.298172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.313151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.313171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.324705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.324723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.338381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.338400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 [2024-11-26 07:41:47.353772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.353791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 16390.20 IOPS, 128.05 MiB/s [2024-11-26T06:41:47.487Z] [2024-11-26 07:41:47.364505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.387 [2024-11-26 07:41:47.364524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.387 00:32:19.387 Latency(us) 00:32:19.387 [2024-11-26T06:41:47.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.387 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:19.388 Nvme1n1 : 5.01 16394.50 128.08 0.00 0.00 7800.18 1994.57 14303.94 00:32:19.388 [2024-11-26T06:41:47.488Z] =================================================================================================================== 00:32:19.388 [2024-11-26T06:41:47.488Z] Total : 16394.50 128.08 0.00 0.00 7800.18 1994.57 14303.94 00:32:19.388 [2024-11-26 07:41:47.376505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.376522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.388 [2024-11-26 07:41:47.388507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.388521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.388 [2024-11-26 07:41:47.400508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.400528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.388 [2024-11-26 07:41:47.412501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.412514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.388 [2024-11-26 07:41:47.424505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.424528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.388 [2024-11-26 07:41:47.436499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.436512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.388 [2024-11-26 
07:41:47.448507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.448527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.388 [2024-11-26 07:41:47.460499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.460511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.388 [2024-11-26 07:41:47.472497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.388 [2024-11-26 07:41:47.472508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.647 [2024-11-26 07:41:47.484501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.647 [2024-11-26 07:41:47.484511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.647 [2024-11-26 07:41:47.496500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.647 [2024-11-26 07:41:47.496512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.647 [2024-11-26 07:41:47.508496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.647 [2024-11-26 07:41:47.508507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.647 [2024-11-26 07:41:47.520497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.647 [2024-11-26 07:41:47.520507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (953144) - No such process 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 953144 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:19.647 delay0 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.647 07:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:19.647 [2024-11-26 07:41:47.661403] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:26.214 Initializing NVMe Controllers 00:32:26.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:26.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:26.214 Initialization complete. Launching workers. 00:32:26.214 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5817 00:32:26.214 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6093, failed to submit 44 00:32:26.214 success 5947, unsuccessful 146, failed 0 00:32:26.214 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:26.214 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:26.214 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:26.214 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:26.214 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:26.214 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:26.214 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:26.214 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:26.214 rmmod nvme_tcp 00:32:26.472 rmmod nvme_fabrics 00:32:26.472 rmmod nvme_keyring 00:32:26.472 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:26.472 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:26.472 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 951461 ']' 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 951461 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 951461 ']' 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 951461 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 951461 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 951461' 00:32:26.473 killing process with pid 951461 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 951461 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 951461 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:26.473 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:26.732 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:26.732 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:26.732 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:26.732 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:26.732 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:26.732 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.732 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.732 07:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.637 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:28.637 00:32:28.637 real 0m31.393s 00:32:28.637 user 0m41.488s 00:32:28.637 sys 0m12.025s 00:32:28.637 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.637 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:28.637 ************************************ 00:32:28.637 END TEST nvmf_zcopy 00:32:28.637 ************************************ 00:32:28.637 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:28.637 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:28.637 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.637 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:28.637 ************************************ 00:32:28.637 START TEST nvmf_nmic 00:32:28.637 ************************************ 00:32:28.637 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:28.897 * 
Looking for test storage... 00:32:28.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:28.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.897 --rc genhtml_branch_coverage=1 00:32:28.897 --rc genhtml_function_coverage=1 00:32:28.897 --rc genhtml_legend=1 00:32:28.897 --rc geninfo_all_blocks=1 00:32:28.897 --rc geninfo_unexecuted_blocks=1 00:32:28.897 00:32:28.897 ' 00:32:28.897 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:28.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.897 --rc genhtml_branch_coverage=1 00:32:28.897 --rc genhtml_function_coverage=1 00:32:28.897 --rc genhtml_legend=1 00:32:28.897 --rc geninfo_all_blocks=1 00:32:28.897 --rc geninfo_unexecuted_blocks=1 00:32:28.897 00:32:28.897 ' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:28.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.898 --rc genhtml_branch_coverage=1 00:32:28.898 --rc genhtml_function_coverage=1 00:32:28.898 --rc genhtml_legend=1 00:32:28.898 --rc geninfo_all_blocks=1 00:32:28.898 --rc geninfo_unexecuted_blocks=1 00:32:28.898 00:32:28.898 ' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:28.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.898 --rc genhtml_branch_coverage=1 00:32:28.898 --rc genhtml_function_coverage=1 00:32:28.898 --rc genhtml_legend=1 00:32:28.898 --rc geninfo_all_blocks=1 00:32:28.898 --rc geninfo_unexecuted_blocks=1 00:32:28.898 00:32:28.898 ' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.898 07:41:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:28.898 07:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.174 07:42:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:34.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.174 07:42:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:34.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:34.174 Found net devices under 0000:86:00.0: cvl_0_0 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.174 
07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:34.174 Found net devices under 0000:86:00.1: cvl_0_1 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.174 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
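The nvmftestinit trace above amounts to a small piece of iproute2/iptables setup: one E810 port (cvl_0_0) is moved into a private network namespace for the SPDK target, while the other port (cvl_0_1) stays in the default namespace as the initiator side. Below is a minimal hand-written sketch of the equivalent commands, assuming the interface names, namespace name and 10.0.0.0/24 addresses used in this run; it is a reconstruction of what nvmf/common.sh does here, not the harness itself.

#!/usr/bin/env bash
# Minimal sketch of the test-network setup traced above (assumptions: same
# NIC names cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, 10.0.0.0/24 subnet).
set -euo pipefail

TARGET_IF=cvl_0_0        # port handed to the SPDK target
INITIATOR_IF=cvl_0_1     # port left in the default namespace
NETNS=cvl_0_0_ns_spdk

# Drop any stale addresses before repartitioning the ports.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Move the target-side port into its own namespace and address both ends.
ip netns add "$NETNS"
ip link set "$TARGET_IF" netns "$NETNS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NETNS" ip link set "$TARGET_IF" up
ip netns exec "$NETNS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, tagged the same
# way the harness's ipts helper tags it so it can be cleaned up later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions, as the harness does before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NETNS" ping -c 1 10.0.0.1

Keeping the target port in its own namespace gives the SPDK target a separate TCP stack on the same host, so the NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 goes out one physical port and back in the other rather than over loopback, which is what the two ping checks in the trace confirm.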
00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:32:34.175 00:32:34.175 --- 10.0.0.2 ping statistics --- 00:32:34.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.175 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:34.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:32:34.175 00:32:34.175 --- 10.0.0.1 ping statistics --- 00:32:34.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.175 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:34.175 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=958684 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 958684 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 958684 ']' 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.175 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.175 [2024-11-26 07:42:02.083720] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:34.175 [2024-11-26 07:42:02.084650] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:32:34.175 [2024-11-26 07:42:02.084686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.175 [2024-11-26 07:42:02.151526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:34.175 [2024-11-26 07:42:02.196278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.175 [2024-11-26 07:42:02.196316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.175 [2024-11-26 07:42:02.196323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.175 [2024-11-26 07:42:02.196329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.175 [2024-11-26 07:42:02.196335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.175 [2024-11-26 07:42:02.197795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.175 [2024-11-26 07:42:02.197830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:34.175 [2024-11-26 07:42:02.197921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:34.175 [2024-11-26 07:42:02.197922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.175 [2024-11-26 07:42:02.266403] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:34.175 [2024-11-26 07:42:02.266489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:34.175 [2024-11-26 07:42:02.266617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
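The entries just above show nvmfappstart launching the target inside the namespace with --interrupt-mode (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF) and waitforlisten blocking on /var/tmp/spdk.sock; the "Set spdk_thread (...) to intr mode" notices confirm each poll-group thread made the switch, so the reactors sleep on file descriptors instead of busy-polling. A rough standalone equivalent of that start-up plus the RPC bring-up the nmic test issues next in the trace is sketched below; the checkout path and the rpc_get_methods readiness probe are assumptions on my part, while the RPC commands themselves are the ones visible in the log.

#!/usr/bin/env bash
# Rough sketch of the target start-up and RPC bring-up traced here.
# Assumptions: SPDK checkout at $SPDK_DIR (hypothetical path), default RPC
# socket /var/tmp/spdk.sock, rpc_get_methods used as a simple readiness probe.
set -euo pipefail
SPDK_DIR=/path/to/spdk   # hypothetical; this run uses the Jenkins workspace copy

# Start the NVMe-oF target inside the test namespace, in interrupt mode.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

# Wait for the RPC server to answer (rough stand-in for waitforlisten).
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# Same RPC sequence the nmic test issues via rpc_cmd in the trace that follows:
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Because /var/tmp/spdk.sock is a Unix-domain socket bound to the filesystem rather than to a network namespace, the RPC calls can be made from the default namespace even though the target process itself runs inside cvl_0_0_ns_spdk, which is how rpc_cmd operates in this trace.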
00:32:34.175 [2024-11-26 07:42:02.266835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:34.435 [2024-11-26 07:42:02.267025] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 [2024-11-26 07:42:02.330602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 Malloc0 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.435 
07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 [2024-11-26 07:42:02.394601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:34.435 test case1: single bdev can't be used in multiple subsystems 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 [2024-11-26 07:42:02.418351] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:34.435 [2024-11-26 07:42:02.418372] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:34.435 [2024-11-26 07:42:02.418380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:34.435 request: 00:32:34.435 { 00:32:34.435 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:34.435 "namespace": { 00:32:34.435 "bdev_name": "Malloc0", 00:32:34.435 "no_auto_visible": false 00:32:34.435 }, 00:32:34.435 "method": "nvmf_subsystem_add_ns", 00:32:34.435 "req_id": 1 00:32:34.435 } 00:32:34.435 Got JSON-RPC error response 00:32:34.435 response: 00:32:34.435 { 00:32:34.435 "code": -32602, 00:32:34.435 "message": "Invalid parameters" 00:32:34.435 } 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:34.435 07:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:34.435 Adding namespace failed - expected result. 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:34.435 test case2: host connect to nvmf target in multiple paths 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.435 [2024-11-26 07:42:02.426452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.435 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:34.694 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:34.953 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:34.953 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:34.953 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:34.953 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:34.953 07:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:36.857 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:36.857 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:36.857 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:36.857 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:36.857 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:36.857 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:36.857 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:36.857 [global] 00:32:36.857 thread=1 00:32:36.857 invalidate=1 
00:32:36.857 rw=write 00:32:36.857 time_based=1 00:32:36.857 runtime=1 00:32:36.857 ioengine=libaio 00:32:36.857 direct=1 00:32:36.857 bs=4096 00:32:36.857 iodepth=1 00:32:36.857 norandommap=0 00:32:36.857 numjobs=1 00:32:36.857 00:32:36.857 verify_dump=1 00:32:36.857 verify_backlog=512 00:32:36.857 verify_state_save=0 00:32:36.857 do_verify=1 00:32:36.857 verify=crc32c-intel 00:32:36.857 [job0] 00:32:36.857 filename=/dev/nvme0n1 00:32:36.857 Could not set queue depth (nvme0n1) 00:32:37.116 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:37.116 fio-3.35 00:32:37.116 Starting 1 thread 00:32:38.493 00:32:38.493 job0: (groupid=0, jobs=1): err= 0: pid=959397: Tue Nov 26 07:42:06 2024 00:32:38.493 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(92.0KiB/1021msec) 00:32:38.493 slat (nsec): min=10209, max=24251, avg=22097.83, stdev=2647.05 00:32:38.493 clat (usec): min=40874, max=41051, avg=40965.54, stdev=47.55 00:32:38.493 lat (usec): min=40896, max=41074, avg=40987.64, stdev=46.54 00:32:38.493 clat percentiles (usec): 00:32:38.493 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:38.493 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:38.494 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:38.494 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:38.494 | 99.99th=[41157] 00:32:38.494 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:32:38.494 slat (nsec): min=9039, max=39211, avg=10142.93, stdev=1794.87 00:32:38.494 clat (usec): min=126, max=376, avg=140.57, stdev=12.65 00:32:38.494 lat (usec): min=141, max=416, avg=150.71, stdev=13.76 00:32:38.494 clat percentiles (usec): 00:32:38.494 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 137], 00:32:38.494 | 30.00th=[ 139], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:32:38.494 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 145], 95.00th=[ 147], 00:32:38.494 | 99.00th=[ 163], 99.50th=[ 227], 99.90th=[ 379], 99.95th=[ 379], 00:32:38.494 | 99.99th=[ 379] 00:32:38.494 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:38.494 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:38.494 lat (usec) : 250=95.51%, 500=0.19% 00:32:38.494 lat (msec) : 50=4.30% 00:32:38.494 cpu : usr=0.39%, sys=0.29%, ctx=535, majf=0, minf=1 00:32:38.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.494 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:38.494 00:32:38.494 Run status group 0 (all jobs): 00:32:38.494 READ: bw=90.1KiB/s (92.3kB/s), 90.1KiB/s-90.1KiB/s (92.3kB/s-92.3kB/s), io=92.0KiB (94.2kB), run=1021-1021msec 00:32:38.494 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:32:38.494 00:32:38.494 Disk stats (read/write): 00:32:38.494 nvme0n1: ios=70/512, merge=0/0, ticks=870/73, in_queue=943, util=91.48% 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:38.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:38.494 07:42:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.494 rmmod nvme_tcp 00:32:38.494 rmmod nvme_fabrics 00:32:38.494 rmmod nvme_keyring 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 958684 ']' 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 958684 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 958684 ']' 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 958684 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:38.494 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 958684 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 958684' 00:32:38.754 killing process with pid 958684 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 958684 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 958684 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.754 07:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.290 07:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.290 00:32:41.290 real 0m12.187s 00:32:41.290 user 0m23.124s 00:32:41.290 sys 0m5.597s 00:32:41.290 07:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.290 07:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:41.290 ************************************ 00:32:41.290 END TEST nvmf_nmic 00:32:41.290 ************************************ 00:32:41.290 07:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:41.290 07:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:41.290 07:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.290 07:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:41.290 ************************************ 00:32:41.290 START TEST nvmf_fio_target 00:32:41.290 ************************************ 00:32:41.290 07:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:41.290 * Looking for test storage... 
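The nvmf_nmic run above drove its I/O with the single fio write job printed by fio-wrapper (-p nvmf -i 4096 -d 1 -t write -r 1 -v). A standalone job file assembled from exactly the options shown in that dump would look as follows; only the temporary file name is an addition of this sketch, and /dev/nvme0n1 is assumed to be the freshly connected SPDK namespace.

# Recreate the fio job printed in the trace and run it against the connected namespace.
cat > /tmp/nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

fio /tmp/nmic-write.fio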
00:32:41.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.290 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:41.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.290 --rc genhtml_branch_coverage=1 00:32:41.290 --rc genhtml_function_coverage=1 00:32:41.290 --rc genhtml_legend=1 00:32:41.291 --rc geninfo_all_blocks=1 00:32:41.291 --rc geninfo_unexecuted_blocks=1 00:32:41.291 00:32:41.291 ' 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:41.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.291 --rc genhtml_branch_coverage=1 00:32:41.291 --rc genhtml_function_coverage=1 00:32:41.291 --rc genhtml_legend=1 00:32:41.291 --rc geninfo_all_blocks=1 00:32:41.291 --rc geninfo_unexecuted_blocks=1 00:32:41.291 00:32:41.291 ' 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:41.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.291 --rc genhtml_branch_coverage=1 00:32:41.291 --rc genhtml_function_coverage=1 00:32:41.291 --rc genhtml_legend=1 00:32:41.291 --rc geninfo_all_blocks=1 00:32:41.291 --rc geninfo_unexecuted_blocks=1 00:32:41.291 00:32:41.291 ' 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:41.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.291 --rc genhtml_branch_coverage=1 00:32:41.291 --rc genhtml_function_coverage=1 00:32:41.291 --rc genhtml_legend=1 00:32:41.291 --rc geninfo_all_blocks=1 00:32:41.291 --rc geninfo_unexecuted_blocks=1 00:32:41.291 
00:32:41.291 ' 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.291 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.292 07:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.567 07:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.567 07:42:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:46.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.567 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:46.568 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:46.568 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:46.568 Found net devices under 0000:86:00.1: cvl_0_1 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.568 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:32:46.828 00:32:46.828 --- 10.0.0.2 ping statistics --- 00:32:46.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.828 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:32:46.828 00:32:46.828 --- 10.0.0.1 ping statistics --- 00:32:46.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.828 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=963328 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 963328 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 963328 ']' 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
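The interface plumbing traced above for the fio_target test repeats the pattern used earlier for nvmf_nmic: move the target-side port into its own namespace, address both ends, open TCP/4420, and ping-verify both directions. Every device name, address, and rule below is taken from the trace; only running the steps as root is assumed.

# Put the target-side port in its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open NVMe/TCP port 4420; the SPDK_NVMF comment is what the later cleanup
# (iptables-save | grep -v SPDK_NVMF | iptables-restore) keys on.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1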
00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.828 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.828 [2024-11-26 07:42:14.916800] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:46.828 [2024-11-26 07:42:14.917758] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:32:46.828 [2024-11-26 07:42:14.917793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.087 [2024-11-26 07:42:14.984200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:47.087 [2024-11-26 07:42:15.026903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.087 [2024-11-26 07:42:15.026941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.087 [2024-11-26 07:42:15.026953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.087 [2024-11-26 07:42:15.026960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.087 [2024-11-26 07:42:15.026965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.087 [2024-11-26 07:42:15.028536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.087 [2024-11-26 07:42:15.028630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.087 [2024-11-26 07:42:15.028718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:47.087 [2024-11-26 07:42:15.028720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.087 [2024-11-26 07:42:15.095292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:47.087 [2024-11-26 07:42:15.095430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:47.087 [2024-11-26 07:42:15.095674] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:47.087 [2024-11-26 07:42:15.095975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:47.087 [2024-11-26 07:42:15.096152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:47.087 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.087 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:47.087 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:47.087 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.087 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.087 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.087 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:47.346 [2024-11-26 07:42:15.333210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.346 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.605 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:47.605 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.865 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:47.865 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:48.123 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:48.123 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:48.123 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:48.123 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:48.382 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:48.640 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:48.640 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:48.899 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:48.899 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:49.158 07:42:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:49.158 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:49.158 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:49.416 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:49.416 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:49.675 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:49.675 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:49.675 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.932 [2024-11-26 07:42:17.921337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.932 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:50.190 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:50.449 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:50.707 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:50.707 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:50.707 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:50.707 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:50.707 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:50.707 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:52.609 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:52.609 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:32:52.609 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:52.609 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:52.609 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:52.609 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:52.609 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:52.897 [global] 00:32:52.898 thread=1 00:32:52.898 invalidate=1 00:32:52.898 rw=write 00:32:52.898 time_based=1 00:32:52.898 runtime=1 00:32:52.898 ioengine=libaio 00:32:52.898 direct=1 00:32:52.898 bs=4096 00:32:52.898 iodepth=1 00:32:52.898 norandommap=0 00:32:52.898 numjobs=1 00:32:52.898 00:32:52.898 verify_dump=1 00:32:52.898 verify_backlog=512 00:32:52.898 verify_state_save=0 00:32:52.898 do_verify=1 00:32:52.898 verify=crc32c-intel 00:32:52.898 [job0] 00:32:52.898 filename=/dev/nvme0n1 00:32:52.898 [job1] 00:32:52.898 filename=/dev/nvme0n2 00:32:52.898 [job2] 00:32:52.898 filename=/dev/nvme0n3 00:32:52.898 [job3] 00:32:52.898 filename=/dev/nvme0n4 00:32:52.898 Could not set queue depth (nvme0n1) 00:32:52.898 Could not set queue depth (nvme0n2) 00:32:52.898 Could not set queue depth (nvme0n3) 00:32:52.898 Could not set queue depth (nvme0n4) 00:32:53.159 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.159 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.159 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.159 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.159 fio-3.35 00:32:53.159 Starting 4 threads 00:32:54.534 00:32:54.534 job0: (groupid=0, jobs=1): err= 0: pid=964614: Tue Nov 26 07:42:22 2024 00:32:54.534 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:32:54.534 slat (nsec): min=9492, max=23587, avg=22314.00, stdev=2879.52 00:32:54.534 clat (usec): min=40609, max=41097, avg=40945.27, stdev=92.20 00:32:54.534 lat (usec): min=40618, max=41120, avg=40967.58, stdev=94.55 00:32:54.534 clat percentiles (usec): 00:32:54.534 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:54.534 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:54.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:54.534 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:54.534 | 99.99th=[41157] 00:32:54.534 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:32:54.534 slat (nsec): min=4366, max=39530, avg=10468.99, stdev=2274.58 00:32:54.534 clat (usec): min=150, max=415, avg=189.91, stdev=21.68 00:32:54.534 lat (usec): min=160, max=454, avg=200.38, stdev=22.45 00:32:54.534 clat percentiles (usec): 00:32:54.534 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:32:54.534 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:32:54.534 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 223], 00:32:54.534 | 
99.00th=[ 255], 99.50th=[ 281], 99.90th=[ 416], 99.95th=[ 416], 00:32:54.534 | 99.99th=[ 416] 00:32:54.534 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.534 lat (usec) : 250=94.19%, 500=1.69% 00:32:54.534 lat (msec) : 50=4.12% 00:32:54.534 cpu : usr=0.20%, sys=0.50%, ctx=536, majf=0, minf=1 00:32:54.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.534 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.534 job1: (groupid=0, jobs=1): err= 0: pid=964627: Tue Nov 26 07:42:22 2024 00:32:54.534 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:32:54.534 slat (nsec): min=9475, max=22637, avg=21528.61, stdev=2641.91 00:32:54.534 clat (usec): min=40657, max=41081, avg=40950.80, stdev=89.15 00:32:54.534 lat (usec): min=40666, max=41104, avg=40972.32, stdev=91.02 00:32:54.534 clat percentiles (usec): 00:32:54.534 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:54.534 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:54.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:54.534 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:54.534 | 99.99th=[41157] 00:32:54.534 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:32:54.534 slat (nsec): min=9176, max=39767, avg=10291.80, stdev=1968.18 00:32:54.534 clat (usec): min=130, max=299, avg=168.99, stdev=12.90 00:32:54.534 lat (usec): min=148, max=339, avg=179.28, stdev=13.47 00:32:54.534 clat percentiles (usec): 00:32:54.534 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:32:54.534 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:32:54.534 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:32:54.534 | 99.00th=[ 219], 99.50th=[ 251], 99.90th=[ 302], 99.95th=[ 302], 00:32:54.534 | 99.99th=[ 302] 00:32:54.534 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.534 lat (usec) : 250=95.14%, 500=0.56% 00:32:54.534 lat (msec) : 50=4.30% 00:32:54.534 cpu : usr=0.29%, sys=0.48%, ctx=535, majf=0, minf=2 00:32:54.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.534 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.534 job2: (groupid=0, jobs=1): err= 0: pid=964643: Tue Nov 26 07:42:22 2024 00:32:54.534 read: IOPS=152, BW=611KiB/s (626kB/s)(612KiB/1001msec) 00:32:54.534 slat (nsec): min=7087, max=32958, avg=10312.57, stdev=5755.40 00:32:54.534 clat (usec): min=217, max=41319, avg=5832.28, stdev=14068.45 00:32:54.534 lat (usec): min=226, max=41328, avg=5842.59, stdev=14073.48 00:32:54.534 clat percentiles (usec): 00:32:54.534 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 231], 00:32:54.534 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 239], 
60.00th=[ 243], 00:32:54.534 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[41157], 95.00th=[41157], 00:32:54.534 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:54.534 | 99.99th=[41157] 00:32:54.534 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:32:54.534 slat (nsec): min=9802, max=39139, avg=10912.11, stdev=1722.83 00:32:54.534 clat (usec): min=163, max=408, avg=193.69, stdev=18.96 00:32:54.534 lat (usec): min=174, max=448, avg=204.60, stdev=19.69 00:32:54.534 clat percentiles (usec): 00:32:54.534 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:32:54.534 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:32:54.534 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 221], 00:32:54.534 | 99.00th=[ 260], 99.50th=[ 289], 99.90th=[ 408], 99.95th=[ 408], 00:32:54.534 | 99.99th=[ 408] 00:32:54.534 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.534 lat (usec) : 250=93.23%, 500=3.61% 00:32:54.534 lat (msec) : 50=3.16% 00:32:54.534 cpu : usr=0.50%, sys=0.40%, ctx=666, majf=0, minf=1 00:32:54.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.534 issued rwts: total=153,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.534 job3: (groupid=0, jobs=1): err= 0: pid=964648: Tue Nov 26 07:42:22 2024 00:32:54.534 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:32:54.534 slat (nsec): min=9977, max=24418, avg=22874.32, stdev=2916.40 00:32:54.534 clat (usec): min=40882, max=41819, avg=41001.48, stdev=188.14 00:32:54.534 lat (usec): min=40906, max=41843, avg=41024.35, stdev=188.47 00:32:54.534 clat percentiles (usec): 00:32:54.534 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:54.534 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:54.534 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:54.534 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:54.534 | 99.99th=[41681] 00:32:54.534 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:32:54.534 slat (usec): min=9, max=20411, avg=51.38, stdev=901.57 00:32:54.534 clat (usec): min=136, max=799, avg=179.52, stdev=42.81 00:32:54.534 lat (usec): min=146, max=21211, avg=230.90, stdev=929.64 00:32:54.534 clat percentiles (usec): 00:32:54.534 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 161], 00:32:54.534 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:32:54.534 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 215], 00:32:54.534 | 99.00th=[ 253], 99.50th=[ 367], 99.90th=[ 799], 99.95th=[ 799], 00:32:54.534 | 99.99th=[ 799] 00:32:54.534 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.534 lat (usec) : 250=94.57%, 500=0.94%, 750=0.19%, 1000=0.19% 00:32:54.534 lat (msec) : 50=4.12% 00:32:54.534 cpu : usr=0.49%, sys=0.29%, ctx=536, majf=0, minf=1 00:32:54.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.534 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.534 00:32:54.534 Run status group 0 (all jobs): 00:32:54.534 READ: bw=850KiB/s (871kB/s), 86.1KiB/s-611KiB/s (88.2kB/s-626kB/s), io=880KiB (901kB), run=1001-1035msec 00:32:54.534 WRITE: bw=7915KiB/s (8105kB/s), 1979KiB/s-2046KiB/s (2026kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1035msec 00:32:54.534 00:32:54.534 Disk stats (read/write): 00:32:54.534 nvme0n1: ios=41/512, merge=0/0, ticks=1601/94, in_queue=1695, util=85.87% 00:32:54.534 nvme0n2: ios=68/512, merge=0/0, ticks=802/86, in_queue=888, util=90.96% 00:32:54.534 nvme0n3: ios=75/512, merge=0/0, ticks=1669/97, in_queue=1766, util=93.64% 00:32:54.534 nvme0n4: ios=66/512, merge=0/0, ticks=925/88, in_queue=1013, util=95.27% 00:32:54.535 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:54.535 [global] 00:32:54.535 thread=1 00:32:54.535 invalidate=1 00:32:54.535 rw=randwrite 00:32:54.535 time_based=1 00:32:54.535 runtime=1 00:32:54.535 ioengine=libaio 00:32:54.535 direct=1 00:32:54.535 bs=4096 00:32:54.535 iodepth=1 00:32:54.535 norandommap=0 00:32:54.535 numjobs=1 00:32:54.535 00:32:54.535 verify_dump=1 00:32:54.535 verify_backlog=512 00:32:54.535 verify_state_save=0 00:32:54.535 do_verify=1 00:32:54.535 verify=crc32c-intel 00:32:54.535 [job0] 00:32:54.535 filename=/dev/nvme0n1 00:32:54.535 [job1] 00:32:54.535 filename=/dev/nvme0n2 00:32:54.535 [job2] 00:32:54.535 filename=/dev/nvme0n3 00:32:54.535 [job3] 00:32:54.535 filename=/dev/nvme0n4 00:32:54.535 Could not set queue depth (nvme0n1) 00:32:54.535 Could not set queue depth (nvme0n2) 00:32:54.535 Could not set queue depth (nvme0n3) 00:32:54.535 Could not set queue depth (nvme0n4) 00:32:54.535 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.535 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.535 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.535 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.535 fio-3.35 00:32:54.535 Starting 4 threads 00:32:55.909 00:32:55.909 job0: (groupid=0, jobs=1): err= 0: pid=965033: Tue Nov 26 07:42:23 2024 00:32:55.909 read: IOPS=1996, BW=7984KiB/s (8176kB/s)(8200KiB/1027msec) 00:32:55.909 slat (nsec): min=7212, max=40380, avg=8262.79, stdev=1216.74 00:32:55.909 clat (usec): min=201, max=41050, avg=268.19, stdev=1272.84 00:32:55.909 lat (usec): min=209, max=41072, avg=276.45, stdev=1273.08 00:32:55.909 clat percentiles (usec): 00:32:55.909 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:32:55.909 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229], 00:32:55.909 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 249], 00:32:55.909 | 99.00th=[ 273], 99.50th=[ 297], 99.90th=[ 400], 99.95th=[41157], 00:32:55.909 | 99.99th=[41157] 00:32:55.909 write: IOPS=2492, BW=9971KiB/s (10.2MB/s)(10.0MiB/1027msec); 0 zone resets 00:32:55.909 slat (nsec): min=10346, max=44488, avg=11506.78, stdev=1482.41 00:32:55.909 clat (usec): 
min=132, max=302, avg=162.83, stdev=21.11 00:32:55.909 lat (usec): min=143, max=313, avg=174.33, stdev=21.30 00:32:55.909 clat percentiles (usec): 00:32:55.909 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:32:55.909 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:32:55.909 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 190], 95.00th=[ 215], 00:32:55.909 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 285], 00:32:55.909 | 99.99th=[ 302] 00:32:55.909 bw ( KiB/s): min= 9520, max=10960, per=45.59%, avg=10240.00, stdev=1018.23, samples=2 00:32:55.909 iops : min= 2380, max= 2740, avg=2560.00, stdev=254.56, samples=2 00:32:55.909 lat (usec) : 250=97.68%, 500=2.28% 00:32:55.909 lat (msec) : 50=0.04% 00:32:55.909 cpu : usr=3.80%, sys=7.12%, ctx=4611, majf=0, minf=1 00:32:55.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.909 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.909 job1: (groupid=0, jobs=1): err= 0: pid=965034: Tue Nov 26 07:42:23 2024 00:32:55.909 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:55.909 slat (nsec): min=7177, max=41492, avg=8471.47, stdev=1366.16 00:32:55.909 clat (usec): min=198, max=784, avg=269.92, stdev=70.72 00:32:55.909 lat (usec): min=205, max=793, avg=278.39, stdev=70.81 00:32:55.909 clat percentiles (usec): 00:32:55.909 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 233], 00:32:55.909 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:32:55.909 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 408], 95.00th=[ 449], 00:32:55.909 | 99.00th=[ 474], 99.50th=[ 486], 99.90th=[ 570], 99.95th=[ 635], 00:32:55.909 | 99.99th=[ 783] 00:32:55.910 write: IOPS=2180, BW=8723KiB/s (8933kB/s)(8732KiB/1001msec); 0 zone resets 00:32:55.910 slat (nsec): min=9838, max=76433, avg=11446.81, stdev=2550.50 00:32:55.910 clat (usec): min=129, max=296, avg=179.53, stdev=38.49 00:32:55.910 lat (usec): min=139, max=327, avg=190.98, stdev=38.67 00:32:55.910 clat percentiles (usec): 00:32:55.910 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:32:55.910 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 178], 00:32:55.910 | 70.00th=[ 188], 80.00th=[ 206], 90.00th=[ 251], 95.00th=[ 265], 00:32:55.910 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:32:55.910 | 99.99th=[ 297] 00:32:55.910 bw ( KiB/s): min= 8192, max= 8192, per=36.47%, avg=8192.00, stdev= 0.00, samples=1 00:32:55.910 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:55.910 lat (usec) : 250=78.23%, 500=21.67%, 750=0.07%, 1000=0.02% 00:32:55.910 cpu : usr=4.50%, sys=5.90%, ctx=4232, majf=0, minf=1 00:32:55.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.910 issued rwts: total=2048,2183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.910 job2: (groupid=0, jobs=1): err= 0: pid=965035: Tue Nov 26 07:42:23 2024 00:32:55.910 read: IOPS=398, BW=1593KiB/s (1631kB/s)(1636KiB/1027msec) 00:32:55.910 slat (nsec): 
min=4881, max=26866, avg=7499.89, stdev=3845.03 00:32:55.910 clat (usec): min=203, max=41133, avg=2236.82, stdev=8787.86 00:32:55.910 lat (usec): min=209, max=41157, avg=2244.32, stdev=8791.28 00:32:55.910 clat percentiles (usec): 00:32:55.910 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:32:55.910 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 258], 00:32:55.910 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 449], 00:32:55.910 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:55.910 | 99.99th=[41157] 00:32:55.910 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:32:55.910 slat (nsec): min=5759, max=27459, avg=10621.50, stdev=2921.60 00:32:55.910 clat (usec): min=152, max=321, avg=197.16, stdev=28.28 00:32:55.910 lat (usec): min=164, max=334, avg=207.78, stdev=28.98 00:32:55.910 clat percentiles (usec): 00:32:55.910 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:32:55.910 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:32:55.910 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 260], 00:32:55.910 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 322], 99.95th=[ 322], 00:32:55.910 | 99.99th=[ 322] 00:32:55.910 bw ( KiB/s): min= 4096, max= 4096, per=18.24%, avg=4096.00, stdev= 0.00, samples=1 00:32:55.910 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:55.910 lat (usec) : 250=76.00%, 500=21.82% 00:32:55.910 lat (msec) : 50=2.17% 00:32:55.910 cpu : usr=0.58%, sys=0.68%, ctx=921, majf=0, minf=1 00:32:55.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.910 issued rwts: total=409,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.910 job3: (groupid=0, jobs=1): err= 0: pid=965036: Tue Nov 26 07:42:23 2024 00:32:55.910 read: IOPS=22, BW=90.9KiB/s (93.1kB/s)(92.0KiB/1012msec) 00:32:55.910 slat (nsec): min=7227, max=24671, avg=22793.57, stdev=4592.40 00:32:55.910 clat (usec): min=313, max=41026, avg=39181.36, stdev=8473.60 00:32:55.910 lat (usec): min=322, max=41050, avg=39204.16, stdev=8476.57 00:32:55.910 clat percentiles (usec): 00:32:55.910 | 1.00th=[ 314], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:32:55.910 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:55.910 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:55.910 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:55.910 | 99.99th=[41157] 00:32:55.910 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:32:55.910 slat (nsec): min=9201, max=45322, avg=10429.78, stdev=1886.08 00:32:55.910 clat (usec): min=164, max=308, avg=201.72, stdev=25.50 00:32:55.910 lat (usec): min=174, max=343, avg=212.15, stdev=25.88 00:32:55.910 clat percentiles (usec): 00:32:55.910 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 178], 00:32:55.910 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 208], 00:32:55.910 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 243], 00:32:55.910 | 99.00th=[ 260], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 310], 00:32:55.910 | 99.99th=[ 310] 00:32:55.910 bw ( KiB/s): min= 4096, max= 4096, per=18.24%, avg=4096.00, stdev= 0.00, samples=1 00:32:55.910 iops 
: min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:55.910 lat (usec) : 250=93.64%, 500=2.24% 00:32:55.910 lat (msec) : 50=4.11% 00:32:55.910 cpu : usr=0.30%, sys=0.49%, ctx=537, majf=0, minf=1 00:32:55.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.910 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.910 00:32:55.910 Run status group 0 (all jobs): 00:32:55.910 READ: bw=17.2MiB/s (18.1MB/s), 90.9KiB/s-8184KiB/s (93.1kB/s-8380kB/s), io=17.7MiB (18.6MB), run=1001-1027msec 00:32:55.910 WRITE: bw=21.9MiB/s (23.0MB/s), 1994KiB/s-9971KiB/s (2042kB/s-10.2MB/s), io=22.5MiB (23.6MB), run=1001-1027msec 00:32:55.910 00:32:55.910 Disk stats (read/write): 00:32:55.910 nvme0n1: ios=2068/2048, merge=0/0, ticks=1347/335, in_queue=1682, util=90.28% 00:32:55.910 nvme0n2: ios=1586/2040, merge=0/0, ticks=477/362, in_queue=839, util=91.38% 00:32:55.910 nvme0n3: ios=461/512, merge=0/0, ticks=780/96, in_queue=876, util=94.91% 00:32:55.910 nvme0n4: ios=50/512, merge=0/0, ticks=1126/95, in_queue=1221, util=100.00% 00:32:55.910 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:55.910 [global] 00:32:55.910 thread=1 00:32:55.910 invalidate=1 00:32:55.910 rw=write 00:32:55.910 time_based=1 00:32:55.910 runtime=1 00:32:55.910 ioengine=libaio 00:32:55.910 direct=1 00:32:55.910 bs=4096 00:32:55.910 iodepth=128 00:32:55.910 norandommap=0 00:32:55.910 numjobs=1 00:32:55.910 00:32:55.910 verify_dump=1 00:32:55.910 verify_backlog=512 00:32:55.910 verify_state_save=0 00:32:55.910 do_verify=1 00:32:55.910 verify=crc32c-intel 00:32:55.910 [job0] 00:32:55.910 filename=/dev/nvme0n1 00:32:55.910 [job1] 00:32:55.910 filename=/dev/nvme0n2 00:32:55.910 [job2] 00:32:55.910 filename=/dev/nvme0n3 00:32:55.910 [job3] 00:32:55.910 filename=/dev/nvme0n4 00:32:55.910 Could not set queue depth (nvme0n1) 00:32:55.910 Could not set queue depth (nvme0n2) 00:32:55.910 Could not set queue depth (nvme0n3) 00:32:55.910 Could not set queue depth (nvme0n4) 00:32:56.167 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.167 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.167 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.167 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.167 fio-3.35 00:32:56.167 Starting 4 threads 00:32:57.542 00:32:57.542 job0: (groupid=0, jobs=1): err= 0: pid=965406: Tue Nov 26 07:42:25 2024 00:32:57.542 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:32:57.542 slat (nsec): min=1295, max=50374k, avg=162608.66, stdev=1227294.17 00:32:57.542 clat (usec): min=7978, max=61234, avg=20368.40, stdev=9005.82 00:32:57.542 lat (usec): min=8636, max=61237, avg=20531.01, stdev=9047.08 00:32:57.542 clat percentiles (usec): 00:32:57.542 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10945], 00:32:57.542 | 30.00th=[12649], 40.00th=[17433], 50.00th=[20055], 60.00th=[22152], 00:32:57.542 | 
70.00th=[24249], 80.00th=[27919], 90.00th=[30016], 95.00th=[34866], 00:32:57.542 | 99.00th=[60031], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:32:57.542 | 99.99th=[61080] 00:32:57.542 write: IOPS=3258, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1004msec); 0 zone resets 00:32:57.542 slat (usec): min=2, max=11426, avg=146.69, stdev=826.93 00:32:57.542 clat (usec): min=1877, max=60750, avg=19368.22, stdev=9852.50 00:32:57.542 lat (usec): min=5580, max=60760, avg=19514.92, stdev=9853.94 00:32:57.542 clat percentiles (usec): 00:32:57.542 | 1.00th=[ 8160], 5.00th=[10159], 10.00th=[10290], 20.00th=[10683], 00:32:57.542 | 30.00th=[11994], 40.00th=[16319], 50.00th=[17171], 60.00th=[19792], 00:32:57.542 | 70.00th=[22938], 80.00th=[24511], 90.00th=[30802], 95.00th=[35390], 00:32:57.542 | 99.00th=[60556], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:32:57.542 | 99.99th=[60556] 00:32:57.542 bw ( KiB/s): min=11528, max=13624, per=18.76%, avg=12576.00, stdev=1482.10, samples=2 00:32:57.542 iops : min= 2882, max= 3406, avg=3144.00, stdev=370.52, samples=2 00:32:57.542 lat (msec) : 2=0.02%, 10=4.76%, 20=50.35%, 50=42.88%, 100=2.00% 00:32:57.542 cpu : usr=4.09%, sys=3.29%, ctx=268, majf=0, minf=1 00:32:57.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:57.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.542 issued rwts: total=3072,3272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.542 job1: (groupid=0, jobs=1): err= 0: pid=965407: Tue Nov 26 07:42:25 2024 00:32:57.542 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:32:57.542 slat (nsec): min=1406, max=13489k, avg=129615.56, stdev=824121.82 00:32:57.542 clat (usec): min=8500, max=52346, avg=17144.45, stdev=6952.51 00:32:57.542 lat (usec): min=8510, max=52349, avg=17274.07, stdev=6985.19 00:32:57.542 clat percentiles (usec): 00:32:57.542 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[10814], 20.00th=[11600], 00:32:57.542 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14091], 60.00th=[16319], 00:32:57.542 | 70.00th=[20055], 80.00th=[22152], 90.00th=[27657], 95.00th=[31327], 00:32:57.542 | 99.00th=[39060], 99.50th=[39060], 99.90th=[42206], 99.95th=[52167], 00:32:57.542 | 99.99th=[52167] 00:32:57.542 write: IOPS=3407, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1003msec); 0 zone resets 00:32:57.542 slat (usec): min=2, max=45313, avg=169.19, stdev=1456.41 00:32:57.542 clat (msec): min=2, max=175, avg=17.75, stdev=14.59 00:32:57.542 lat (msec): min=3, max=175, avg=17.91, stdev=14.86 00:32:57.542 clat percentiles (msec): 00:32:57.542 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:32:57.542 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 15], 00:32:57.542 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 35], 95.00th=[ 44], 00:32:57.542 | 99.00th=[ 83], 99.50th=[ 87], 99.90th=[ 146], 99.95th=[ 176], 00:32:57.542 | 99.99th=[ 176] 00:32:57.542 bw ( KiB/s): min=10768, max=15560, per=19.64%, avg=13164.00, stdev=3388.46, samples=2 00:32:57.542 iops : min= 2692, max= 3890, avg=3291.00, stdev=847.11, samples=2 00:32:57.542 lat (msec) : 4=0.26%, 10=8.07%, 20=65.99%, 50=24.64%, 100=0.91% 00:32:57.542 lat (msec) : 250=0.12% 00:32:57.542 cpu : usr=2.89%, sys=4.09%, ctx=334, majf=0, minf=1 00:32:57.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:57.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.542 issued rwts: total=3072,3418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.542 job2: (groupid=0, jobs=1): err= 0: pid=965408: Tue Nov 26 07:42:25 2024 00:32:57.542 read: IOPS=4266, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1004msec) 00:32:57.542 slat (nsec): min=1053, max=8549.2k, avg=106377.34, stdev=664135.23 00:32:57.542 clat (usec): min=993, max=75529, avg=14718.54, stdev=5571.13 00:32:57.542 lat (usec): min=4114, max=75535, avg=14824.92, stdev=5618.06 00:32:57.542 clat percentiles (usec): 00:32:57.542 | 1.00th=[ 5538], 5.00th=[ 7701], 10.00th=[ 9634], 20.00th=[10290], 00:32:57.542 | 30.00th=[11338], 40.00th=[13304], 50.00th=[14746], 60.00th=[15270], 00:32:57.542 | 70.00th=[16909], 80.00th=[18220], 90.00th=[20841], 95.00th=[21627], 00:32:57.542 | 99.00th=[26084], 99.50th=[26346], 99.90th=[66847], 99.95th=[66847], 00:32:57.542 | 99.99th=[76022] 00:32:57.542 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:32:57.542 slat (nsec): min=1925, max=12269k, avg=106296.74, stdev=674734.43 00:32:57.542 clat (usec): min=477, max=41123, avg=13961.95, stdev=5501.01 00:32:57.542 lat (usec): min=501, max=41147, avg=14068.25, stdev=5570.44 00:32:57.542 clat percentiles (usec): 00:32:57.542 | 1.00th=[ 3326], 5.00th=[ 6521], 10.00th=[ 8160], 20.00th=[10945], 00:32:57.542 | 30.00th=[11600], 40.00th=[11863], 50.00th=[13173], 60.00th=[14222], 00:32:57.542 | 70.00th=[14746], 80.00th=[16188], 90.00th=[19268], 95.00th=[26870], 00:32:57.542 | 99.00th=[31589], 99.50th=[32637], 99.90th=[35914], 99.95th=[40109], 00:32:57.542 | 99.99th=[41157] 00:32:57.542 bw ( KiB/s): min=17296, max=19568, per=27.50%, avg=18432.00, stdev=1606.55, samples=2 00:32:57.542 iops : min= 4324, max= 4892, avg=4608.00, stdev=401.64, samples=2 00:32:57.542 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:57.542 lat (msec) : 2=0.19%, 4=0.70%, 10=14.80%, 20=73.91%, 50=10.16% 00:32:57.542 lat (msec) : 100=0.21% 00:32:57.542 cpu : usr=3.69%, sys=5.88%, ctx=366, majf=0, minf=1 00:32:57.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:57.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.542 issued rwts: total=4284,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.542 job3: (groupid=0, jobs=1): err= 0: pid=965409: Tue Nov 26 07:42:25 2024 00:32:57.542 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:32:57.542 slat (nsec): min=1133, max=11369k, avg=75704.83, stdev=628233.88 00:32:57.542 clat (usec): min=2053, max=44740, avg=11281.83, stdev=5286.80 00:32:57.543 lat (usec): min=2059, max=49806, avg=11357.54, stdev=5343.39 00:32:57.543 clat percentiles (usec): 00:32:57.543 | 1.00th=[ 4621], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 8586], 00:32:57.543 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10683], 00:32:57.543 | 70.00th=[11076], 80.00th=[12387], 90.00th=[16319], 95.00th=[21890], 00:32:57.543 | 99.00th=[34866], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:32:57.543 | 99.99th=[44827] 00:32:57.543 write: IOPS=5514, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1002msec); 0 zone resets 00:32:57.543 slat (nsec): min=1958, max=8891.3k, avg=80183.05, stdev=505242.10 00:32:57.543 clat 
(usec): min=300, max=38475, avg=12490.85, stdev=8369.44 00:32:57.543 lat (usec): min=377, max=38503, avg=12571.03, stdev=8434.04 00:32:57.543 clat percentiles (usec): 00:32:57.543 | 1.00th=[ 2180], 5.00th=[ 4293], 10.00th=[ 5276], 20.00th=[ 6849], 00:32:57.543 | 30.00th=[ 7832], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10552], 00:32:57.543 | 70.00th=[11207], 80.00th=[16909], 90.00th=[27657], 95.00th=[30802], 00:32:57.543 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[38536], 00:32:57.543 | 99.99th=[38536] 00:32:57.543 bw ( KiB/s): min=28672, max=28672, per=42.78%, avg=28672.00, stdev= 0.00, samples=1 00:32:57.543 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:32:57.543 lat (usec) : 500=0.04%, 750=0.03% 00:32:57.543 lat (msec) : 2=0.27%, 4=1.89%, 10=51.07%, 20=34.55%, 50=12.15% 00:32:57.543 cpu : usr=3.40%, sys=6.09%, ctx=407, majf=0, minf=1 00:32:57.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:57.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.543 issued rwts: total=5120,5526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.543 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.543 00:32:57.543 Run status group 0 (all jobs): 00:32:57.543 READ: bw=60.5MiB/s (63.4MB/s), 12.0MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=60.7MiB (63.7MB), run=1002-1004msec 00:32:57.543 WRITE: bw=65.5MiB/s (68.6MB/s), 12.7MiB/s-21.5MiB/s (13.3MB/s-22.6MB/s), io=65.7MiB (68.9MB), run=1002-1004msec 00:32:57.543 00:32:57.543 Disk stats (read/write): 00:32:57.543 nvme0n1: ios=2610/2946, merge=0/0, ticks=13574/12826, in_queue=26400, util=86.77% 00:32:57.543 nvme0n2: ios=2571/2560, merge=0/0, ticks=16532/14551, in_queue=31083, util=99.09% 00:32:57.543 nvme0n3: ios=3584/4063, merge=0/0, ticks=24808/23644, in_queue=48452, util=88.88% 00:32:57.543 nvme0n4: ios=4650/4838, merge=0/0, ticks=47227/55393, in_queue=102620, util=99.48% 00:32:57.543 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:57.543 [global] 00:32:57.543 thread=1 00:32:57.543 invalidate=1 00:32:57.543 rw=randwrite 00:32:57.543 time_based=1 00:32:57.543 runtime=1 00:32:57.543 ioengine=libaio 00:32:57.543 direct=1 00:32:57.543 bs=4096 00:32:57.543 iodepth=128 00:32:57.543 norandommap=0 00:32:57.543 numjobs=1 00:32:57.543 00:32:57.543 verify_dump=1 00:32:57.543 verify_backlog=512 00:32:57.543 verify_state_save=0 00:32:57.543 do_verify=1 00:32:57.543 verify=crc32c-intel 00:32:57.543 [job0] 00:32:57.543 filename=/dev/nvme0n1 00:32:57.543 [job1] 00:32:57.543 filename=/dev/nvme0n2 00:32:57.543 [job2] 00:32:57.543 filename=/dev/nvme0n3 00:32:57.543 [job3] 00:32:57.543 filename=/dev/nvme0n4 00:32:57.543 Could not set queue depth (nvme0n1) 00:32:57.543 Could not set queue depth (nvme0n2) 00:32:57.543 Could not set queue depth (nvme0n3) 00:32:57.543 Could not set queue depth (nvme0n4) 00:32:57.801 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:57.801 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:57.801 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:57.801 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:57.801 fio-3.35 00:32:57.801 Starting 4 threads 00:32:59.176 00:32:59.176 job0: (groupid=0, jobs=1): err= 0: pid=965777: Tue Nov 26 07:42:27 2024 00:32:59.176 read: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec) 00:32:59.176 slat (nsec): min=1445, max=48891k, avg=298773.69, stdev=2104540.63 00:32:59.176 clat (usec): min=11874, max=97534, avg=38189.99, stdev=23351.40 00:32:59.176 lat (usec): min=14152, max=97544, avg=38488.76, stdev=23465.15 00:32:59.176 clat percentiles (usec): 00:32:59.176 | 1.00th=[14222], 5.00th=[14615], 10.00th=[16057], 20.00th=[16581], 00:32:59.176 | 30.00th=[17957], 40.00th=[24773], 50.00th=[29754], 60.00th=[34866], 00:32:59.176 | 70.00th=[54264], 80.00th=[63701], 90.00th=[74974], 95.00th=[80217], 00:32:59.176 | 99.00th=[96994], 99.50th=[96994], 99.90th=[98042], 99.95th=[98042], 00:32:59.176 | 99.99th=[98042] 00:32:59.176 write: IOPS=1861, BW=7446KiB/s (7624kB/s)(7520KiB/1010msec); 0 zone resets 00:32:59.176 slat (usec): min=2, max=35922, avg=281.44, stdev=1523.94 00:32:59.176 clat (usec): min=8005, max=99580, avg=35586.11, stdev=20376.22 00:32:59.176 lat (usec): min=9694, max=99591, avg=35867.55, stdev=20455.28 00:32:59.176 clat percentiles (usec): 00:32:59.176 | 1.00th=[11338], 5.00th=[14615], 10.00th=[15926], 20.00th=[22676], 00:32:59.176 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[29492], 00:32:59.176 | 70.00th=[41681], 80.00th=[51643], 90.00th=[70779], 95.00th=[76022], 00:32:59.176 | 99.00th=[99091], 99.50th=[99091], 99.90th=[99091], 99.95th=[99091], 00:32:59.176 | 99.99th=[99091] 00:32:59.176 bw ( KiB/s): min= 5832, max= 8192, per=11.08%, avg=7012.00, stdev=1668.77, samples=2 00:32:59.176 iops : min= 1458, max= 2048, avg=1753.00, stdev=417.19, samples=2 00:32:59.176 lat (msec) : 10=0.26%, 20=24.09%, 50=47.92%, 100=27.72% 00:32:59.176 cpu : usr=1.19%, sys=2.68%, ctx=236, majf=0, minf=1 00:32:59.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:32:59.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:59.176 issued rwts: total=1536,1880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:59.176 job1: (groupid=0, jobs=1): err= 0: pid=965778: Tue Nov 26 07:42:27 2024 00:32:59.176 read: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1011msec) 00:32:59.176 slat (nsec): min=1274, max=13090k, avg=106454.89, stdev=688132.21 00:32:59.176 clat (usec): min=5305, max=49828, avg=12855.74, stdev=5400.31 00:32:59.176 lat (usec): min=5311, max=49832, avg=12962.19, stdev=5457.70 00:32:59.176 clat percentiles (usec): 00:32:59.176 | 1.00th=[ 7046], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9503], 00:32:59.176 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11338], 60.00th=[11994], 00:32:59.176 | 70.00th=[12911], 80.00th=[14746], 90.00th=[20055], 95.00th=[22152], 00:32:59.176 | 99.00th=[38011], 99.50th=[42730], 99.90th=[50070], 99.95th=[50070], 00:32:59.176 | 99.99th=[50070] 00:32:59.176 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:32:59.176 slat (usec): min=2, max=9582, avg=115.18, stdev=571.41 00:32:59.176 clat (usec): min=4116, max=54422, avg=16069.47, stdev=10476.44 00:32:59.176 lat (usec): min=4122, max=54429, avg=16184.65, stdev=10546.14 00:32:59.176 clat percentiles (usec): 00:32:59.176 | 1.00th=[ 5538], 5.00th=[ 7963], 10.00th=[ 8094], 20.00th=[ 8356], 
00:32:59.176 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10814], 60.00th=[11731], 00:32:59.176 | 70.00th=[18482], 80.00th=[24773], 90.00th=[29754], 95.00th=[39584], 00:32:59.176 | 99.00th=[51119], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:32:59.176 | 99.99th=[54264] 00:32:59.176 bw ( KiB/s): min=15560, max=21040, per=28.93%, avg=18300.00, stdev=3874.95, samples=2 00:32:59.176 iops : min= 3890, max= 5260, avg=4575.00, stdev=968.74, samples=2 00:32:59.176 lat (msec) : 10=34.49%, 20=45.58%, 50=18.87%, 100=1.07% 00:32:59.176 cpu : usr=3.47%, sys=4.36%, ctx=477, majf=0, minf=1 00:32:59.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:59.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:59.177 issued rwts: total=4190,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:59.177 job2: (groupid=0, jobs=1): err= 0: pid=965785: Tue Nov 26 07:42:27 2024 00:32:59.177 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:32:59.177 slat (nsec): min=1131, max=22852k, avg=122465.14, stdev=926239.94 00:32:59.177 clat (usec): min=2206, max=69516, avg=15407.51, stdev=9633.18 00:32:59.177 lat (usec): min=2213, max=69519, avg=15529.97, stdev=9710.65 00:32:59.177 clat percentiles (usec): 00:32:59.177 | 1.00th=[ 2835], 5.00th=[ 3326], 10.00th=[ 4686], 20.00th=[ 8029], 00:32:59.177 | 30.00th=[ 9634], 40.00th=[10945], 50.00th=[12780], 60.00th=[13566], 00:32:59.177 | 70.00th=[16909], 80.00th=[26084], 90.00th=[30540], 95.00th=[33817], 00:32:59.177 | 99.00th=[40633], 99.50th=[40633], 99.90th=[45351], 99.95th=[69731], 00:32:59.177 | 99.99th=[69731] 00:32:59.177 write: IOPS=3826, BW=14.9MiB/s (15.7MB/s)(15.1MiB/1011msec); 0 zone resets 00:32:59.177 slat (nsec): min=1873, max=17537k, avg=130574.66, stdev=876676.90 00:32:59.177 clat (usec): min=358, max=64462, avg=18851.64, stdev=12399.98 00:32:59.177 lat (usec): min=478, max=64467, avg=18982.21, stdev=12476.47 00:32:59.177 clat percentiles (usec): 00:32:59.177 | 1.00th=[ 2671], 5.00th=[ 5866], 10.00th=[ 7046], 20.00th=[ 9503], 00:32:59.177 | 30.00th=[11076], 40.00th=[11731], 50.00th=[14222], 60.00th=[17695], 00:32:59.177 | 70.00th=[22938], 80.00th=[28443], 90.00th=[35914], 95.00th=[41681], 00:32:59.177 | 99.00th=[62653], 99.50th=[62653], 99.90th=[64226], 99.95th=[64226], 00:32:59.177 | 99.99th=[64226] 00:32:59.177 bw ( KiB/s): min=13480, max=16456, per=23.66%, avg=14968.00, stdev=2104.35, samples=2 00:32:59.177 iops : min= 3370, max= 4114, avg=3742.00, stdev=526.09, samples=2 00:32:59.177 lat (usec) : 500=0.01%, 1000=0.04% 00:32:59.177 lat (msec) : 4=5.27%, 10=21.29%, 20=42.45%, 50=29.02%, 100=1.91% 00:32:59.177 cpu : usr=2.77%, sys=4.26%, ctx=349, majf=0, minf=1 00:32:59.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:59.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:59.177 issued rwts: total=3584,3869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:59.177 job3: (groupid=0, jobs=1): err= 0: pid=965786: Tue Nov 26 07:42:27 2024 00:32:59.177 read: IOPS=5383, BW=21.0MiB/s (22.1MB/s)(21.2MiB/1006msec) 00:32:59.177 slat (nsec): min=1384, max=14721k, avg=74343.77, stdev=662251.18 00:32:59.177 clat (usec): min=1727, 
max=47090, avg=10319.39, stdev=3661.82 00:32:59.177 lat (usec): min=3350, max=47097, avg=10393.73, stdev=3702.34 00:32:59.177 clat percentiles (usec): 00:32:59.177 | 1.00th=[ 5342], 5.00th=[ 6652], 10.00th=[ 7504], 20.00th=[ 8094], 00:32:59.177 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9896], 00:32:59.177 | 70.00th=[10814], 80.00th=[12256], 90.00th=[14484], 95.00th=[16450], 00:32:59.177 | 99.00th=[26608], 99.50th=[27657], 99.90th=[42730], 99.95th=[46924], 00:32:59.177 | 99.99th=[46924] 00:32:59.177 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:32:59.177 slat (usec): min=2, max=20815, avg=90.69, stdev=684.59 00:32:59.177 clat (usec): min=452, max=87039, avg=12706.54, stdev=11168.59 00:32:59.177 lat (usec): min=480, max=87043, avg=12797.23, stdev=11247.06 00:32:59.177 clat percentiles (usec): 00:32:59.177 | 1.00th=[ 2966], 5.00th=[ 4752], 10.00th=[ 5735], 20.00th=[ 7373], 00:32:59.177 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:32:59.177 | 70.00th=[10683], 80.00th=[15926], 90.00th=[23725], 95.00th=[29492], 00:32:59.177 | 99.00th=[73925], 99.50th=[79168], 99.90th=[84411], 99.95th=[86508], 00:32:59.177 | 99.99th=[87557] 00:32:59.177 bw ( KiB/s): min=20952, max=24104, per=35.61%, avg=22528.00, stdev=2228.80, samples=2 00:32:59.177 iops : min= 5238, max= 6026, avg=5632.00, stdev=557.20, samples=2 00:32:59.177 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.02% 00:32:59.177 lat (msec) : 2=0.19%, 4=1.66%, 10=62.55%, 20=27.62%, 50=6.63% 00:32:59.177 lat (msec) : 100=1.29% 00:32:59.177 cpu : usr=3.68%, sys=5.97%, ctx=517, majf=0, minf=1 00:32:59.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:59.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:59.177 issued rwts: total=5416,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:59.177 00:32:59.177 Run status group 0 (all jobs): 00:32:59.177 READ: bw=56.9MiB/s (59.7MB/s), 6083KiB/s-21.0MiB/s (6229kB/s-22.1MB/s), io=57.5MiB (60.3MB), run=1006-1011msec 00:32:59.177 WRITE: bw=61.8MiB/s (64.8MB/s), 7446KiB/s-21.9MiB/s (7624kB/s-22.9MB/s), io=62.5MiB (65.5MB), run=1006-1011msec 00:32:59.177 00:32:59.177 Disk stats (read/write): 00:32:59.177 nvme0n1: ios=1074/1343, merge=0/0, ticks=11510/14079, in_queue=25589, util=82.77% 00:32:59.177 nvme0n2: ios=3604/3871, merge=0/0, ticks=34624/42732, in_queue=77356, util=100.00% 00:32:59.177 nvme0n3: ios=2589/2599, merge=0/0, ticks=23636/29395, in_queue=53031, util=98.92% 00:32:59.177 nvme0n4: ios=4764/5120, merge=0/0, ticks=47653/52303, in_queue=99956, util=100.00% 00:32:59.177 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:59.177 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=966011 00:32:59.177 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:59.177 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:59.177 [global] 00:32:59.177 thread=1 00:32:59.177 invalidate=1 00:32:59.177 rw=read 00:32:59.177 time_based=1 00:32:59.177 runtime=10 00:32:59.177 ioengine=libaio 00:32:59.177 direct=1 00:32:59.177 bs=4096 00:32:59.177 
iodepth=1 00:32:59.177 norandommap=1 00:32:59.177 numjobs=1 00:32:59.177 00:32:59.177 [job0] 00:32:59.177 filename=/dev/nvme0n1 00:32:59.177 [job1] 00:32:59.177 filename=/dev/nvme0n2 00:32:59.177 [job2] 00:32:59.177 filename=/dev/nvme0n3 00:32:59.177 [job3] 00:32:59.177 filename=/dev/nvme0n4 00:32:59.177 Could not set queue depth (nvme0n1) 00:32:59.177 Could not set queue depth (nvme0n2) 00:32:59.177 Could not set queue depth (nvme0n3) 00:32:59.177 Could not set queue depth (nvme0n4) 00:32:59.435 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.435 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.435 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.435 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.435 fio-3.35 00:32:59.435 Starting 4 threads 00:33:01.963 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:02.221 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:02.221 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:33:02.221 fio: pid=966154, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:02.480 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=40964096, buflen=4096 00:33:02.480 fio: pid=966152, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:02.480 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.480 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:02.739 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=41795584, buflen=4096 00:33:02.739 fio: pid=966150, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:02.739 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.739 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:02.998 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48742400, buflen=4096 00:33:02.998 fio: pid=966151, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:02.998 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.998 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:02.998 00:33:02.998 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=966150: Tue Nov 26 07:42:30 2024 00:33:02.998 read: IOPS=3264, 
BW=12.8MiB/s (13.4MB/s)(39.9MiB/3126msec) 00:33:02.998 slat (usec): min=5, max=22307, avg=13.56, stdev=313.68 00:33:02.998 clat (usec): min=188, max=41270, avg=289.24, stdev=995.79 00:33:02.998 lat (usec): min=196, max=41277, avg=302.80, stdev=1044.41 00:33:02.998 clat percentiles (usec): 00:33:02.998 | 1.00th=[ 202], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:33:02.998 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:33:02.998 | 70.00th=[ 258], 80.00th=[ 293], 90.00th=[ 371], 95.00th=[ 375], 00:33:02.998 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 545], 99.95th=[41157], 00:33:02.998 | 99.99th=[41157] 00:33:02.998 bw ( KiB/s): min= 8616, max=15064, per=34.07%, avg=13107.00, stdev=2355.42, samples=6 00:33:02.998 iops : min= 2154, max= 3766, avg=3276.67, stdev=588.78, samples=6 00:33:02.998 lat (usec) : 250=62.07%, 500=37.75%, 750=0.08% 00:33:02.998 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 50=0.06% 00:33:02.998 cpu : usr=1.12%, sys=4.10%, ctx=10209, majf=0, minf=1 00:33:02.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.998 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.998 issued rwts: total=10205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.998 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=966151: Tue Nov 26 07:42:30 2024 00:33:02.998 read: IOPS=3557, BW=13.9MiB/s (14.6MB/s)(46.5MiB/3345msec) 00:33:02.998 slat (usec): min=2, max=11717, avg=11.62, stdev=193.94 00:33:02.998 clat (usec): min=179, max=40940, avg=266.06, stdev=567.80 00:33:02.998 lat (usec): min=186, max=40947, avg=277.05, stdev=596.34 00:33:02.998 clat percentiles (usec): 00:33:02.998 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 225], 00:33:02.998 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:33:02.998 | 70.00th=[ 251], 80.00th=[ 281], 90.00th=[ 371], 95.00th=[ 375], 00:33:02.998 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 498], 99.95th=[ 1614], 00:33:02.998 | 99.99th=[40633] 00:33:02.998 bw ( KiB/s): min=11608, max=16192, per=36.53%, avg=14055.33, stdev=1843.73, samples=6 00:33:02.998 iops : min= 2902, max= 4048, avg=3513.83, stdev=460.93, samples=6 00:33:02.998 lat (usec) : 250=69.10%, 500=30.80%, 750=0.03% 00:33:02.998 lat (msec) : 2=0.02%, 20=0.02%, 50=0.02% 00:33:02.998 cpu : usr=1.94%, sys=4.40%, ctx=11906, majf=0, minf=2 00:33:02.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.998 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.998 issued rwts: total=11901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.998 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=966152: Tue Nov 26 07:42:30 2024 00:33:02.998 read: IOPS=3422, BW=13.4MiB/s (14.0MB/s)(39.1MiB/2922msec) 00:33:02.998 slat (nsec): min=6951, max=48066, avg=8217.54, stdev=1390.06 00:33:02.998 clat (usec): min=202, max=41228, avg=279.85, stdev=1076.54 00:33:02.998 lat (usec): min=210, max=41236, avg=288.06, stdev=1076.71 00:33:02.998 clat percentiles (usec): 00:33:02.998 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 241], 
00:33:02.998 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:33:02.998 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:33:02.998 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 506], 99.95th=[40633], 00:33:02.998 | 99.99th=[41157] 00:33:02.998 bw ( KiB/s): min= 6496, max=15616, per=35.35%, avg=13598.40, stdev=3974.11, samples=5 00:33:02.998 iops : min= 1624, max= 3904, avg=3399.60, stdev=993.53, samples=5 00:33:02.998 lat (usec) : 250=55.33%, 500=44.55%, 750=0.03%, 1000=0.01% 00:33:02.998 lat (msec) : 50=0.07% 00:33:02.998 cpu : usr=2.05%, sys=5.37%, ctx=10002, majf=0, minf=2 00:33:02.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.998 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.998 issued rwts: total=10002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.998 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=966154: Tue Nov 26 07:42:30 2024 00:33:02.998 read: IOPS=24, BW=98.3KiB/s (101kB/s)(268KiB/2727msec) 00:33:02.998 slat (nsec): min=12106, max=37508, avg=23004.21, stdev=2468.94 00:33:02.998 clat (usec): min=442, max=41134, avg=40359.34, stdev=4951.06 00:33:02.998 lat (usec): min=479, max=41157, avg=40382.36, stdev=4949.28 00:33:02.998 clat percentiles (usec): 00:33:02.998 | 1.00th=[ 445], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:02.998 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:02.998 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:02.998 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:02.998 | 99.99th=[41157] 00:33:02.998 bw ( KiB/s): min= 96, max= 104, per=0.26%, avg=99.20, stdev= 4.38, samples=5 00:33:02.998 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:33:02.998 lat (usec) : 500=1.47% 00:33:02.998 lat (msec) : 50=97.06% 00:33:02.998 cpu : usr=0.15%, sys=0.00%, ctx=68, majf=0, minf=1 00:33:02.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.998 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.998 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.998 00:33:02.998 Run status group 0 (all jobs): 00:33:02.998 READ: bw=37.6MiB/s (39.4MB/s), 98.3KiB/s-13.9MiB/s (101kB/s-14.6MB/s), io=126MiB (132MB), run=2727-3345msec 00:33:02.998 00:33:02.998 Disk stats (read/write): 00:33:02.998 nvme0n1: ios=10200/0, merge=0/0, ticks=2863/0, in_queue=2863, util=93.84% 00:33:02.998 nvme0n2: ios=10926/0, merge=0/0, ticks=2874/0, in_queue=2874, util=94.92% 00:33:02.998 nvme0n3: ios=9822/0, merge=0/0, ticks=2633/0, in_queue=2633, util=96.52% 00:33:02.998 nvme0n4: ios=64/0, merge=0/0, ticks=2583/0, in_queue=2583, util=96.44% 00:33:03.256 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:03.256 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:03.256 07:42:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:03.256 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:03.515 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:03.515 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:03.773 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:03.773 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:04.031 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:04.031 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 966011 00:33:04.031 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:04.031 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:04.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:04.031 nvmf hotplug test: fio failed as expected 00:33:04.031 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:04.290 07:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:04.290 rmmod nvme_tcp 00:33:04.290 rmmod nvme_fabrics 00:33:04.290 rmmod nvme_keyring 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 963328 ']' 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 963328 00:33:04.290 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 963328 ']' 00:33:04.291 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 963328 00:33:04.291 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:04.291 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:04.291 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 963328 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 963328' 00:33:04.549 killing process with pid 963328 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 963328 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 963328 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.549 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:07.082 00:33:07.082 real 0m25.694s 00:33:07.082 user 1m30.555s 00:33:07.082 sys 0m11.491s 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.082 ************************************ 00:33:07.082 END TEST nvmf_fio_target 00:33:07.082 ************************************ 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:07.082 ************************************ 00:33:07.082 START TEST nvmf_bdevio 00:33:07.082 ************************************ 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:07.082 * Looking for test storage... 
00:33:07.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.082 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:07.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.082 --rc genhtml_branch_coverage=1 00:33:07.082 --rc genhtml_function_coverage=1 00:33:07.082 --rc genhtml_legend=1 00:33:07.082 --rc geninfo_all_blocks=1 00:33:07.083 --rc geninfo_unexecuted_blocks=1 00:33:07.083 00:33:07.083 ' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.083 --rc genhtml_branch_coverage=1 00:33:07.083 --rc genhtml_function_coverage=1 00:33:07.083 --rc genhtml_legend=1 00:33:07.083 --rc geninfo_all_blocks=1 00:33:07.083 --rc geninfo_unexecuted_blocks=1 00:33:07.083 00:33:07.083 ' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.083 --rc genhtml_branch_coverage=1 00:33:07.083 --rc genhtml_function_coverage=1 00:33:07.083 --rc genhtml_legend=1 00:33:07.083 --rc geninfo_all_blocks=1 00:33:07.083 --rc geninfo_unexecuted_blocks=1 00:33:07.083 00:33:07.083 ' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:07.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.083 --rc genhtml_branch_coverage=1 00:33:07.083 --rc genhtml_function_coverage=1 00:33:07.083 --rc genhtml_legend=1 00:33:07.083 --rc geninfo_all_blocks=1 00:33:07.083 --rc geninfo_unexecuted_blocks=1 00:33:07.083 00:33:07.083 ' 00:33:07.083 07:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.083 07:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:07.083 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:12.350 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:12.350 07:42:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:12.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:12.350 Found net devices under 0000:86:00.0: cvl_0_0 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:12.350 Found net devices under 0000:86:00.1: cvl_0_1 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:12.350 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.351 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:33:12.351 00:33:12.351 --- 10.0.0.2 ping statistics --- 00:33:12.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.351 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:12.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:33:12.351 00:33:12.351 --- 10.0.0.1 ping statistics --- 00:33:12.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.351 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.351 07:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=970389 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 970389 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 970389 ']' 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.351 [2024-11-26 07:42:40.126206] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:12.351 [2024-11-26 07:42:40.127121] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:33:12.351 [2024-11-26 07:42:40.127155] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.351 [2024-11-26 07:42:40.192189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:12.351 [2024-11-26 07:42:40.234844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.351 [2024-11-26 07:42:40.234880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.351 [2024-11-26 07:42:40.234887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.351 [2024-11-26 07:42:40.234893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.351 [2024-11-26 07:42:40.234899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.351 [2024-11-26 07:42:40.236368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:12.351 [2024-11-26 07:42:40.236477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:12.351 [2024-11-26 07:42:40.236583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:12.351 [2024-11-26 07:42:40.236584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:12.351 [2024-11-26 07:42:40.302770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:12.351 [2024-11-26 07:42:40.303723] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:12.351 [2024-11-26 07:42:40.303745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:12.351 [2024-11-26 07:42:40.304044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:12.351 [2024-11-26 07:42:40.304100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.351 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.351 [2024-11-26 07:42:40.361034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.352 Malloc0 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.352 07:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.352 [2024-11-26 07:42:40.425220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:12.352 { 00:33:12.352 "params": { 00:33:12.352 "name": "Nvme$subsystem", 00:33:12.352 "trtype": "$TEST_TRANSPORT", 00:33:12.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:12.352 "adrfam": "ipv4", 00:33:12.352 "trsvcid": "$NVMF_PORT", 00:33:12.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:12.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:12.352 "hdgst": ${hdgst:-false}, 00:33:12.352 "ddgst": ${ddgst:-false} 00:33:12.352 }, 00:33:12.352 "method": "bdev_nvme_attach_controller" 00:33:12.352 } 00:33:12.352 EOF 00:33:12.352 )") 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:33:12.352 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:12.609 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:12.609 "params": { 00:33:12.609 "name": "Nvme1", 00:33:12.609 "trtype": "tcp", 00:33:12.609 "traddr": "10.0.0.2", 00:33:12.609 "adrfam": "ipv4", 00:33:12.609 "trsvcid": "4420", 00:33:12.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:12.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:12.609 "hdgst": false, 00:33:12.609 "ddgst": false 00:33:12.609 }, 00:33:12.609 "method": "bdev_nvme_attach_controller" 00:33:12.609 }' 00:33:12.609 [2024-11-26 07:42:40.476478] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:33:12.610 [2024-11-26 07:42:40.476522] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970415 ] 00:33:12.610 [2024-11-26 07:42:40.539363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:12.610 [2024-11-26 07:42:40.583769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.610 [2024-11-26 07:42:40.583865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.610 [2024-11-26 07:42:40.583868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.867 I/O targets: 00:33:12.867 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:12.867 00:33:12.867 00:33:12.867 CUnit - A unit testing framework for C - Version 2.1-3 00:33:12.867 http://cunit.sourceforge.net/ 00:33:12.867 00:33:12.867 00:33:12.867 Suite: bdevio tests on: Nvme1n1 00:33:12.867 Test: blockdev write read block ...passed 00:33:12.867 Test: blockdev write zeroes read block ...passed 00:33:12.867 Test: blockdev write zeroes read no split ...passed 00:33:12.867 Test: blockdev write zeroes read split ...passed 00:33:12.867 Test: blockdev write zeroes read split partial ...passed 00:33:12.867 Test: blockdev reset ...[2024-11-26 07:42:40.919841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:12.867 [2024-11-26 07:42:40.919906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139a340 (9): Bad file descriptor 00:33:12.867 [2024-11-26 07:42:40.923361] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:33:12.867 passed 00:33:12.867 Test: blockdev write read 8 blocks ...passed 00:33:12.867 Test: blockdev write read size > 128k ...passed 00:33:12.867 Test: blockdev write read invalid size ...passed 00:33:13.124 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:13.124 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:13.124 Test: blockdev write read max offset ...passed 00:33:13.124 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:13.124 Test: blockdev writev readv 8 blocks ...passed 00:33:13.124 Test: blockdev writev readv 30 x 1block ...passed 00:33:13.124 Test: blockdev writev readv block ...passed 00:33:13.124 Test: blockdev writev readv size > 128k ...passed 00:33:13.124 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:13.124 Test: blockdev comparev and writev ...[2024-11-26 07:42:41.092824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.124 [2024-11-26 07:42:41.092854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.092869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.124 [2024-11-26 07:42:41.092877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.093188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.124 [2024-11-26 07:42:41.093200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.093212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.124 [2024-11-26 07:42:41.093220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.093508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.124 [2024-11-26 07:42:41.093520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.093532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.124 [2024-11-26 07:42:41.093540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.093834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.124 [2024-11-26 07:42:41.093845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.093857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.124 [2024-11-26 07:42:41.093865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:13.124 passed 00:33:13.124 Test: blockdev nvme passthru rw ...passed 00:33:13.124 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:42:41.176324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.124 [2024-11-26 07:42:41.176342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.176457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.124 [2024-11-26 07:42:41.176468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.176582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.124 [2024-11-26 07:42:41.176592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:13.124 [2024-11-26 07:42:41.176702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.124 [2024-11-26 07:42:41.176713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:13.124 passed 00:33:13.124 Test: blockdev nvme admin passthru ...passed 00:33:13.381 Test: blockdev copy ...passed 00:33:13.381 00:33:13.381 Run Summary: Type Total Ran Passed Failed Inactive 00:33:13.381 suites 1 1 n/a 0 0 00:33:13.381 tests 23 23 23 0 0 00:33:13.381 asserts 152 152 152 0 n/a 00:33:13.381 00:33:13.381 Elapsed time = 0.916 seconds 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.381 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.381 rmmod nvme_tcp 00:33:13.381 rmmod nvme_fabrics 00:33:13.382 rmmod nvme_keyring 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 970389 ']' 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 970389 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 970389 ']' 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 970389 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.382 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 970389 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 970389' 00:33:13.640 killing process with pid 970389 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 970389 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 970389 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.640 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.172 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:16.172 00:33:16.172 real 0m9.037s 00:33:16.172 user 0m7.657s 
00:33:16.172 sys 0m4.613s 00:33:16.172 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.172 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:16.172 ************************************ 00:33:16.172 END TEST nvmf_bdevio 00:33:16.172 ************************************ 00:33:16.172 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:16.172 00:33:16.172 real 4m25.253s 00:33:16.172 user 9m0.619s 00:33:16.172 sys 1m47.238s 00:33:16.172 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.172 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:16.172 ************************************ 00:33:16.172 END TEST nvmf_target_core_interrupt_mode 00:33:16.172 ************************************ 00:33:16.172 07:42:43 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:16.172 07:42:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:16.172 07:42:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.172 07:42:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:16.172 ************************************ 00:33:16.172 START TEST nvmf_interrupt 00:33:16.172 ************************************ 00:33:16.172 07:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:16.172 * Looking for test storage... 
00:33:16.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:16.172 07:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:16.172 07:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:33:16.172 07:42:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:16.172 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:16.172 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:16.172 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:16.172 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:16.172 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:16.172 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:16.172 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.173 --rc genhtml_branch_coverage=1 00:33:16.173 --rc genhtml_function_coverage=1 00:33:16.173 --rc genhtml_legend=1 00:33:16.173 --rc geninfo_all_blocks=1 00:33:16.173 --rc geninfo_unexecuted_blocks=1 00:33:16.173 00:33:16.173 ' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.173 --rc genhtml_branch_coverage=1 00:33:16.173 --rc genhtml_function_coverage=1 00:33:16.173 --rc genhtml_legend=1 00:33:16.173 --rc geninfo_all_blocks=1 00:33:16.173 --rc geninfo_unexecuted_blocks=1 00:33:16.173 00:33:16.173 ' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.173 --rc genhtml_branch_coverage=1 00:33:16.173 --rc genhtml_function_coverage=1 00:33:16.173 --rc genhtml_legend=1 00:33:16.173 --rc geninfo_all_blocks=1 00:33:16.173 --rc geninfo_unexecuted_blocks=1 00:33:16.173 00:33:16.173 ' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.173 --rc genhtml_branch_coverage=1 00:33:16.173 --rc genhtml_function_coverage=1 00:33:16.173 --rc genhtml_legend=1 00:33:16.173 --rc geninfo_all_blocks=1 00:33:16.173 --rc geninfo_unexecuted_blocks=1 00:33:16.173 00:33:16.173 ' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:16.173 07:42:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:21.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.615 07:42:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:21.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:21.615 Found net devices under 0000:86:00.0: cvl_0_0 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:21.615 Found net devices under 0000:86:00.1: cvl_0_1 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:21.615 07:42:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:21.615 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:21.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:33:21.616 00:33:21.616 --- 10.0.0.2 ping statistics --- 00:33:21.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.616 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:21.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:33:21.616 00:33:21.616 --- 10.0.0.1 ping statistics --- 00:33:21.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.616 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=973970 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 973970 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 973970 ']' 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:21.616 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.616 [2024-11-26 07:42:49.604017] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:21.616 [2024-11-26 07:42:49.604966] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:33:21.616 [2024-11-26 07:42:49.605001] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.616 [2024-11-26 07:42:49.672040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:21.875 [2024-11-26 07:42:49.714749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:21.875 [2024-11-26 07:42:49.714784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.875 [2024-11-26 07:42:49.714792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:21.875 [2024-11-26 07:42:49.714798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:21.875 [2024-11-26 07:42:49.714803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:21.875 [2024-11-26 07:42:49.715934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.875 [2024-11-26 07:42:49.715938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.876 [2024-11-26 07:42:49.782795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:21.876 [2024-11-26 07:42:49.783019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:21.876 [2024-11-26 07:42:49.783079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:21.876 5000+0 records in 00:33:21.876 5000+0 records out 00:33:21.876 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0176417 s, 580 MB/s 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.876 AIO0 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.876 [2024-11-26 07:42:49.908532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.876 07:42:49 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.876 [2024-11-26 07:42:49.945008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 973970 0 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 973970 0 idle 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=973970 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 973970 -w 256 00:33:21.876 07:42:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 973970 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.22 reactor_0' 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 973970 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.22 reactor_0 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 973970 1 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 973970 1 idle 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=973970 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 973970 -w 256 00:33:22.136 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:22.395 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 973987 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.00 reactor_1' 00:33:22.395 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 973987 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.00 reactor_1 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=974230 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
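For reference, the initiator workload launched above can be reproduced by hand with roughly the following invocation; this is a sketch that assumes it is run from the top of an SPDK build tree like the one used by this job, with flag annotations following spdk_nvme_perf's help output:

# -q 256     queue depth per namespace
# -o 4096    I/O size in bytes (4 KiB)
# -w randrw  random mixed read/write pattern
# -M 30      read percentage of the mix (30% reads / 70% writes)
# -t 10      run time in seconds
# -c 0xC     core mask, i.e. the initiator runs on cores 2 and 3
# -r ...     transport ID of the 10.0.0.2:4420 listener created earlier in this test
./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'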
00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 973970 0 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 973970 0 busy 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=973970 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 973970 -w 256 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:22.396 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 973970 root 20 0 128.2g 47616 34560 R 86.7 0.0 0:00.35 reactor_0' 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 973970 root 20 0 128.2g 47616 34560 R 86.7 0.0 0:00.35 reactor_0 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 973970 1 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 973970 1 busy 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=973970 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 973970 -w 256 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 973987 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.25 reactor_1' 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 973987 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.25 reactor_1 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:22.653 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:22.654 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:22.654 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:22.654 07:42:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.654 07:42:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 974230 00:33:32.623 Initializing NVMe Controllers 00:33:32.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:32.623 Controller IO queue size 256, less than required. 00:33:32.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:32.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:32.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:32.623 Initialization complete. Launching workers. 
00:33:32.623 ========================================================
00:33:32.623                                                                              Latency(us)
00:33:32.623  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:33:32.623  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   16269.83      63.55   15743.44    3073.71   20039.02
00:33:32.623  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   16104.03      62.91   15903.75    4764.49   19661.17
00:33:32.623 ========================================================
00:33:32.623  Total                                                                    :   32373.86     126.46   15823.18    3073.71   20039.02
00:33:32.623
00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 973970 0 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 973970 0 idle 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=973970 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 973970 -w 256 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 973970 root 20 0 128.2g 47616 34560 S 6.7 0.0 0:20.21 reactor_0' 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 973970 root 20 0 128.2g 47616 34560 S 6.7 0.0 0:20.21 reactor_0 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 973970 1 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 973970 1 idle 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=973970 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:32.623 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 973970 -w 256 00:33:32.882 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 973987 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:09.99 reactor_1' 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 973987 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:09.99 reactor_1 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:32.883 07:43:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:33.142 07:43:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:33.142 07:43:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:33.142 07:43:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:33.142 07:43:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:33.142 07:43:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 973970 0 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 973970 0 idle 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=973970 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 973970 -w 256 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 973970 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.37 reactor_0' 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 973970 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.37 reactor_0 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 973970 1 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 973970 1 idle 00:33:35.679 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=973970 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:35.680 07:43:03 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 973970 -w 256 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 973987 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.05 reactor_1' 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 973987 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.05 reactor_1 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:35.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.680 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.680 rmmod nvme_tcp 00:33:35.939 rmmod nvme_fabrics 00:33:35.939 rmmod nvme_keyring 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 973970 ']' 00:33:35.939 
07:43:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 973970 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 973970 ']' 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 973970 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973970 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973970' 00:33:35.939 killing process with pid 973970 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 973970 00:33:35.939 07:43:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 973970 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:36.198 07:43:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.101 07:43:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.101 00:33:38.101 real 0m22.279s 00:33:38.101 user 0m39.311s 00:33:38.101 sys 0m8.127s 00:33:38.101 07:43:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.101 07:43:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:38.101 ************************************ 00:33:38.101 END TEST nvmf_interrupt 00:33:38.101 ************************************ 00:33:38.101 00:33:38.101 real 26m38.074s 00:33:38.101 user 55m37.996s 00:33:38.101 sys 8m54.487s 00:33:38.101 07:43:06 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.101 07:43:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.101 ************************************ 00:33:38.101 END TEST nvmf_tcp 00:33:38.101 ************************************ 00:33:38.360 07:43:06 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:38.360 07:43:06 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:38.360 07:43:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
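The idle checks traced above boil down to sampling the SPDK reactor thread once with top and comparing its %CPU column against the 30% idle threshold. A minimal bash sketch of that check, reconstructed from the trace rather than copied from interrupt/common.sh (the top flags, thresholds, awk/sed steps and the 973970/reactor_N names are the ones visible above; error handling is simplified):

# Sample one reactor thread of a running SPDK app and report whether it is idle.
# -b batch mode, -H show threads, -n 1 single snapshot, -p limit to the app PID.
reactor_is_idle_sketch() {
    local pid=$1 idx=$2
    local idle_threshold=30                          # same threshold as the test above
    local top_line cpu_rate

    top_line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
    top_line=$(sed -e 's/^\s*//g' <<<"$top_line")    # strip leading whitespace

    cpu_rate=$(awk '{print $9}' <<<"$top_line")      # %CPU column of the thread line
    cpu_rate=${cpu_rate%%.*}                         # keep the integer part only

    (( cpu_rate <= idle_threshold ))                 # returns 0 (success) when idle
}

# Illustrative usage, mirroring the for-loop in target/interrupt.sh:
# for i in 0 1; do reactor_is_idle_sketch 973970 "$i" && echo "reactor_$i idle"; done

In the trace above both reactors report 0.0% CPU, so each check takes the idle branch and returns 0 before the host is disconnected and the target torn down.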
00:33:38.360 07:43:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.360 07:43:06 -- common/autotest_common.sh@10 -- # set +x 00:33:38.360 ************************************ 00:33:38.360 START TEST spdkcli_nvmf_tcp 00:33:38.360 ************************************ 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:38.360 * Looking for test storage... 00:33:38.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:38.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.360 --rc genhtml_branch_coverage=1 00:33:38.360 --rc genhtml_function_coverage=1 00:33:38.360 --rc genhtml_legend=1 00:33:38.360 --rc geninfo_all_blocks=1 00:33:38.360 --rc geninfo_unexecuted_blocks=1 00:33:38.360 00:33:38.360 ' 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:38.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.360 --rc genhtml_branch_coverage=1 00:33:38.360 --rc genhtml_function_coverage=1 00:33:38.360 --rc genhtml_legend=1 00:33:38.360 --rc geninfo_all_blocks=1 00:33:38.360 --rc geninfo_unexecuted_blocks=1 00:33:38.360 00:33:38.360 ' 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:38.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.360 --rc genhtml_branch_coverage=1 00:33:38.360 --rc genhtml_function_coverage=1 00:33:38.360 --rc genhtml_legend=1 00:33:38.360 --rc geninfo_all_blocks=1 00:33:38.360 --rc geninfo_unexecuted_blocks=1 00:33:38.360 00:33:38.360 ' 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:38.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.360 --rc genhtml_branch_coverage=1 00:33:38.360 --rc genhtml_function_coverage=1 00:33:38.360 --rc genhtml_legend=1 00:33:38.360 --rc geninfo_all_blocks=1 00:33:38.360 --rc geninfo_unexecuted_blocks=1 00:33:38.360 00:33:38.360 ' 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:38.360 
07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.360 07:43:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:38.361 07:43:06 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=976909 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 976909 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 976909 ']' 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.361 07:43:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:38.620 [2024-11-26 07:43:06.476113] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
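The nvmf_tgt being launched here is pinned to two cores (-m 0x3) with core 0 as the main core (-p 0), and the test then blocks in waitforlisten until the app's RPC socket answers. A hedged sketch of that bring-up pattern, with paths relative to an SPDK checkout; the real run_nvmf_tgt/waitforlisten helpers do more bookkeeping (PID tracking for the later killprocess, retry limits), so this is only an approximation:

# Start the NVMe-oF target on cores 0-1 and wait for its RPC socket.
./build/bin/nvmf_tgt -m 0x3 -p 0 &
nvmf_tgt_pid=$!

# rpc_get_methods is a cheap RPC that succeeds once /var/tmp/spdk.sock is live;
# polling it approximates what the waitforlisten helper waits for.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmf_tgt_pid) is ready on /var/tmp/spdk.sock"

Once the two "Reactor started on core" notices below appear and waitforlisten returns, the spdkcli configuration steps can begin.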
00:33:38.620 [2024-11-26 07:43:06.476162] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976909 ] 00:33:38.620 [2024-11-26 07:43:06.537893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:38.620 [2024-11-26 07:43:06.582083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.620 [2024-11-26 07:43:06.582088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.620 07:43:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.879 07:43:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:38.879 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:38.879 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:38.879 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:38.879 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:38.879 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:38.879 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:38.879 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:38.879 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:38.879 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:38.879 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:38.879 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:38.879 ' 00:33:41.406 [2024-11-26 07:43:09.188931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.342 [2024-11-26 07:43:10.409030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:44.880 [2024-11-26 07:43:12.655937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:46.786 [2024-11-26 07:43:14.585929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:48.161 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:48.161 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:48.162 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:48.162 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:48.162 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:48.162 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:48.162 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:48.162 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:48.162 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:48.162 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:48.162 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:48.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:48.162 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:48.162 07:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:48.162 07:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.162 07:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.162 07:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:48.162 07:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:48.162 07:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.162 07:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:48.162 07:43:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:48.730 07:43:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:48.730 07:43:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:48.730 07:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:48.730 07:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.731 07:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 
07:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:48.731 07:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:48.731 07:43:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 07:43:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:48.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:48.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:48.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:48.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:48.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:48.731 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:48.731 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:48.731 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:48.731 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:48.731 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:48.731 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:48.731 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:48.731 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:48.731 ' 00:33:54.001 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:54.001 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:54.001 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:54.001 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:54.001 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:54.001 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:54.001 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:54.001 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:54.001 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:54.001 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:54.001 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:54.001 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:54.001 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:54.001 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.001 
07:43:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 976909 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 976909 ']' 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 976909 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 976909 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 976909' 00:33:54.001 killing process with pid 976909 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 976909 00:33:54.001 07:43:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 976909 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 976909 ']' 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 976909 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 976909 ']' 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 976909 00:33:54.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (976909) - No such process 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 976909 is not found' 00:33:54.001 Process with pid 976909 is not found 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:54.001 07:43:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:54.001 00:33:54.002 real 0m15.812s 00:33:54.002 user 0m32.974s 00:33:54.002 sys 0m0.661s 00:33:54.002 07:43:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:54.002 07:43:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.002 ************************************ 00:33:54.002 END TEST spdkcli_nvmf_tcp 00:33:54.002 ************************************ 00:33:54.002 07:43:22 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:54.002 07:43:22 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:54.002 07:43:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:54.002 07:43:22 -- common/autotest_common.sh@10 -- # set +x 00:33:54.261 ************************************ 00:33:54.261 START TEST nvmf_identify_passthru 00:33:54.261 ************************************ 00:33:54.261 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:54.261 * Looking for test storage... 
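Before following the nvmf_identify_passthru run that starts here, note that the spdkcli_nvmf_tcp cycle that just finished above (create config, "ll /nvmf" match check, clear config) can be reproduced by hand against a running nvmf_tgt. A hedged sketch, one spdkcli command per invocation in the same style as the "ll /nvmf" call above; the real test batches these through spdkcli_job.py, and the bdev names, NQNs, serial and ports below are simply copied from the trace:

# Build a small NVMe-oF TCP configuration through spdkcli.
scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4

# Inspect the tree (this is the output the check_match step diffs against
# spdkcli_nvmf.test.match), then tear everything down as the clear step does.
scripts/spdkcli.py ll /nvmf
scripts/spdkcli.py /nvmf/subsystem delete_all
scripts/spdkcli.py /bdevs/malloc delete Malloc1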
00:33:54.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:54.261 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:54.261 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:54.261 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:54.261 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:54.261 07:43:22 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:54.261 07:43:22 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:54.261 07:43:22 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:54.262 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:54.262 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:54.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.262 --rc genhtml_branch_coverage=1 00:33:54.262 --rc genhtml_function_coverage=1 00:33:54.262 --rc genhtml_legend=1 00:33:54.262 --rc geninfo_all_blocks=1 00:33:54.262 --rc geninfo_unexecuted_blocks=1 00:33:54.262 00:33:54.262 ' 00:33:54.262 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:54.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.262 --rc genhtml_branch_coverage=1 00:33:54.262 --rc genhtml_function_coverage=1 00:33:54.262 --rc genhtml_legend=1 00:33:54.262 --rc geninfo_all_blocks=1 00:33:54.262 --rc geninfo_unexecuted_blocks=1 00:33:54.262 00:33:54.262 ' 00:33:54.262 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:54.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.262 --rc genhtml_branch_coverage=1 00:33:54.262 --rc genhtml_function_coverage=1 00:33:54.262 --rc genhtml_legend=1 00:33:54.262 --rc geninfo_all_blocks=1 00:33:54.262 --rc geninfo_unexecuted_blocks=1 00:33:54.262 00:33:54.262 ' 00:33:54.262 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:54.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.262 --rc genhtml_branch_coverage=1 00:33:54.262 --rc genhtml_function_coverage=1 00:33:54.262 --rc genhtml_legend=1 00:33:54.262 --rc geninfo_all_blocks=1 00:33:54.262 --rc geninfo_unexecuted_blocks=1 00:33:54.262 00:33:54.262 ' 00:33:54.262 07:43:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.262 07:43:22 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.262 07:43:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.262 07:43:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.262 07:43:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:54.262 07:43:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:54.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:54.262 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:54.262 07:43:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.262 07:43:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.263 07:43:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.263 07:43:22 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.263 07:43:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.263 07:43:22 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.263 07:43:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:54.263 07:43:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.263 07:43:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.263 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:54.263 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:54.263 07:43:22 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:54.263 07:43:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:59.529 07:43:27 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:59.529 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:59.529 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:59.529 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:59.530 Found net devices under 0000:86:00.0: cvl_0_0 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:59.530 Found net devices under 0000:86:00.1: cvl_0_1 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:59.530 07:43:27 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.530 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.789 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.789 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.789 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:59.789 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.789 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:59.789 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:59.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:59.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:33:59.790 00:33:59.790 --- 10.0.0.2 ping statistics --- 00:33:59.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.790 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:59.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:59.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:33:59.790 00:33:59.790 --- 10.0.0.1 ping statistics --- 00:33:59.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.790 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:59.790 07:43:27 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:59.790 07:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.790 07:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:59.790 07:43:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:59.790 07:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:59.790 07:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:59.790 07:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:59.790 07:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:59.790 07:43:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:03.974 07:43:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:34:03.974 07:43:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:03.974 07:43:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:03.974 07:43:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:08.161 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:08.161 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.161 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.161 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:08.161 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=983915 00:34:08.161 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:08.161 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 983915 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 983915 ']' 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:08.161 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.419 [2024-11-26 07:43:36.283889] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:34:08.419 [2024-11-26 07:43:36.283934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.419 [2024-11-26 07:43:36.348162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:08.419 [2024-11-26 07:43:36.392322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.419 [2024-11-26 07:43:36.392359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:08.419 [2024-11-26 07:43:36.392366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.419 [2024-11-26 07:43:36.392372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.419 [2024-11-26 07:43:36.392377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.419 [2024-11-26 07:43:36.393785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.419 [2024-11-26 07:43:36.393886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.419 [2024-11-26 07:43:36.393986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:08.419 [2024-11-26 07:43:36.393988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.419 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:08.419 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:08.419 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:08.419 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.419 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.419 INFO: Log level set to 20 00:34:08.419 INFO: Requests: 00:34:08.419 { 00:34:08.419 "jsonrpc": "2.0", 00:34:08.419 "method": "nvmf_set_config", 00:34:08.419 "id": 1, 00:34:08.419 "params": { 00:34:08.419 "admin_cmd_passthru": { 00:34:08.419 "identify_ctrlr": true 00:34:08.419 } 00:34:08.419 } 00:34:08.419 } 00:34:08.419 00:34:08.419 INFO: response: 00:34:08.419 { 00:34:08.419 "jsonrpc": "2.0", 00:34:08.419 "id": 1, 00:34:08.419 "result": true 00:34:08.419 } 00:34:08.419 00:34:08.419 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.419 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:08.419 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.419 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.419 INFO: Setting log level to 20 00:34:08.419 INFO: Setting log level to 20 00:34:08.419 INFO: Log level set to 20 00:34:08.419 INFO: Log level set to 20 00:34:08.419 INFO: Requests: 00:34:08.419 { 00:34:08.419 "jsonrpc": "2.0", 00:34:08.419 "method": "framework_start_init", 00:34:08.419 "id": 1 00:34:08.419 } 00:34:08.419 00:34:08.419 INFO: Requests: 00:34:08.419 { 00:34:08.419 "jsonrpc": "2.0", 00:34:08.419 "method": "framework_start_init", 00:34:08.420 "id": 1 00:34:08.420 } 00:34:08.420 00:34:08.677 [2024-11-26 07:43:36.529176] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:08.677 INFO: response: 00:34:08.677 { 00:34:08.677 "jsonrpc": "2.0", 00:34:08.677 "id": 1, 00:34:08.677 "result": true 00:34:08.677 } 00:34:08.677 00:34:08.677 INFO: response: 00:34:08.677 { 00:34:08.677 "jsonrpc": "2.0", 00:34:08.677 "id": 1, 00:34:08.677 "result": true 00:34:08.677 } 00:34:08.677 00:34:08.677 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.677 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:08.677 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.677 07:43:36 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:08.677 INFO: Setting log level to 40 00:34:08.677 INFO: Setting log level to 40 00:34:08.677 INFO: Setting log level to 40 00:34:08.677 [2024-11-26 07:43:36.542501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.677 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.677 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:08.677 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:08.677 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.677 07:43:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:08.677 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.677 07:43:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.960 Nvme0n1 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.960 [2024-11-26 07:43:39.452989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.960 [ 00:34:11.960 { 00:34:11.960 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:11.960 "subtype": "Discovery", 00:34:11.960 "listen_addresses": [], 00:34:11.960 "allow_any_host": true, 00:34:11.960 "hosts": [] 00:34:11.960 }, 00:34:11.960 { 00:34:11.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:11.960 "subtype": "NVMe", 00:34:11.960 "listen_addresses": [ 00:34:11.960 { 00:34:11.960 "trtype": "TCP", 00:34:11.960 "adrfam": "IPv4", 00:34:11.960 "traddr": "10.0.0.2", 00:34:11.960 "trsvcid": "4420" 00:34:11.960 } 00:34:11.960 ], 00:34:11.960 "allow_any_host": true, 00:34:11.960 "hosts": [], 00:34:11.960 "serial_number": 
"SPDK00000000000001", 00:34:11.960 "model_number": "SPDK bdev Controller", 00:34:11.960 "max_namespaces": 1, 00:34:11.960 "min_cntlid": 1, 00:34:11.960 "max_cntlid": 65519, 00:34:11.960 "namespaces": [ 00:34:11.960 { 00:34:11.960 "nsid": 1, 00:34:11.960 "bdev_name": "Nvme0n1", 00:34:11.960 "name": "Nvme0n1", 00:34:11.960 "nguid": "0775DEBBF853400980451FE5A21FAEE0", 00:34:11.960 "uuid": "0775debb-f853-4009-8045-1fe5a21faee0" 00:34:11.960 } 00:34:11.960 ] 00:34:11.960 } 00:34:11.960 ] 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:11.960 07:43:39 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:11.960 rmmod nvme_tcp 00:34:11.960 rmmod nvme_fabrics 00:34:11.960 rmmod nvme_keyring 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 983915 ']' 00:34:11.960 07:43:39 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 983915 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 983915 ']' 00:34:11.960 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 983915 00:34:11.961 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:11.961 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:11.961 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 983915 00:34:11.961 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:11.961 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:11.961 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 983915' 00:34:11.961 killing process with pid 983915 00:34:11.961 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 983915 00:34:11.961 07:43:39 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 983915 00:34:13.862 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:13.862 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:13.862 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:13.862 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:13.862 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:13.862 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:13.862 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:13.863 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.863 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:13.863 07:43:41 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.863 07:43:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.863 07:43:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.768 07:43:43 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:15.768 00:34:15.768 real 0m21.431s 00:34:15.768 user 0m26.852s 00:34:15.768 sys 0m5.885s 00:34:15.768 07:43:43 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:15.768 07:43:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.768 ************************************ 00:34:15.768 END TEST nvmf_identify_passthru 00:34:15.768 ************************************ 00:34:15.768 07:43:43 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:15.768 07:43:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:15.768 07:43:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.768 07:43:43 -- common/autotest_common.sh@10 -- # set +x 00:34:15.768 ************************************ 00:34:15.768 START TEST nvmf_dif 00:34:15.768 ************************************ 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:15.768 * Looking for test storage... 
00:34:15.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:15.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.768 --rc genhtml_branch_coverage=1 00:34:15.768 --rc genhtml_function_coverage=1 00:34:15.768 --rc genhtml_legend=1 00:34:15.768 --rc geninfo_all_blocks=1 00:34:15.768 --rc geninfo_unexecuted_blocks=1 00:34:15.768 00:34:15.768 ' 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:15.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.768 --rc genhtml_branch_coverage=1 00:34:15.768 --rc genhtml_function_coverage=1 00:34:15.768 --rc genhtml_legend=1 00:34:15.768 --rc geninfo_all_blocks=1 00:34:15.768 --rc geninfo_unexecuted_blocks=1 00:34:15.768 00:34:15.768 ' 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:34:15.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.768 --rc genhtml_branch_coverage=1 00:34:15.768 --rc genhtml_function_coverage=1 00:34:15.768 --rc genhtml_legend=1 00:34:15.768 --rc geninfo_all_blocks=1 00:34:15.768 --rc geninfo_unexecuted_blocks=1 00:34:15.768 00:34:15.768 ' 00:34:15.768 07:43:43 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:15.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.768 --rc genhtml_branch_coverage=1 00:34:15.768 --rc genhtml_function_coverage=1 00:34:15.768 --rc genhtml_legend=1 00:34:15.768 --rc geninfo_all_blocks=1 00:34:15.768 --rc geninfo_unexecuted_blocks=1 00:34:15.768 00:34:15.768 ' 00:34:15.768 07:43:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.768 07:43:43 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.768 07:43:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.768 07:43:43 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.768 07:43:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.768 07:43:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:15.768 07:43:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:15.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:15.768 07:43:43 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:15.768 07:43:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:15.769 07:43:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:15.769 07:43:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:15.769 07:43:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:15.769 07:43:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.769 07:43:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:15.769 07:43:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:15.769 07:43:43 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:15.769 07:43:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:21.040 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.040 
07:43:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:21.040 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:21.040 Found net devices under 0000:86:00.0: cvl_0_0 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:21.040 Found net devices under 0000:86:00.1: cvl_0_1 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.040 07:43:48 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.040 07:43:49 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.040 07:43:49 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.040 07:43:49 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:21.040 07:43:49 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.040 07:43:49 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.299 07:43:49 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.299 07:43:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:21.299 07:43:49 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:21.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:34:21.299 00:34:21.299 --- 10.0.0.2 ping statistics --- 00:34:21.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.299 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:34:21.299 07:43:49 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:21.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:34:21.299 00:34:21.299 --- 10.0.0.1 ping statistics --- 00:34:21.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.299 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:34:21.299 07:43:49 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.299 07:43:49 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:21.299 07:43:49 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:21.299 07:43:49 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:23.843 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:23.843 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:23.843 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:23.843 07:43:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:23.843 07:43:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.843 07:43:51 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.843 07:43:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=989193 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 989193 00:34:23.843 07:43:51 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:23.843 07:43:51 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 989193 ']' 00:34:23.843 07:43:51 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.843 07:43:51 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.843 07:43:51 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:23.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.843 07:43:51 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.843 07:43:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.843 [2024-11-26 07:43:51.784301] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:34:23.843 [2024-11-26 07:43:51.784348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.843 [2024-11-26 07:43:51.852704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.843 [2024-11-26 07:43:51.892559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.843 [2024-11-26 07:43:51.892598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.843 [2024-11-26 07:43:51.892605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.843 [2024-11-26 07:43:51.892611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.843 [2024-11-26 07:43:51.892616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.843 [2024-11-26 07:43:51.893194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.103 07:43:51 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.103 07:43:51 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:24.103 07:43:51 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:24.103 07:43:51 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.103 07:43:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:24.103 07:43:52 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:24.103 07:43:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:24.103 07:43:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:24.103 07:43:52 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.103 07:43:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:24.103 [2024-11-26 07:43:52.029307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.103 07:43:52 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.103 07:43:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:24.103 07:43:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:24.103 07:43:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:24.103 07:43:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:24.103 ************************************ 00:34:24.103 START TEST fio_dif_1_default 00:34:24.103 ************************************ 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:24.103 bdev_null0 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:24.103 [2024-11-26 07:43:52.101648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:24.103 { 00:34:24.103 "params": { 00:34:24.103 "name": "Nvme$subsystem", 00:34:24.103 "trtype": "$TEST_TRANSPORT", 00:34:24.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.103 "adrfam": "ipv4", 00:34:24.103 "trsvcid": "$NVMF_PORT", 00:34:24.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.103 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:34:24.103 "hdgst": ${hdgst:-false}, 00:34:24.103 "ddgst": ${ddgst:-false} 00:34:24.103 }, 00:34:24.103 "method": "bdev_nvme_attach_controller" 00:34:24.103 } 00:34:24.103 EOF 00:34:24.103 )") 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:24.103 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:24.104 "params": { 00:34:24.104 "name": "Nvme0", 00:34:24.104 "trtype": "tcp", 00:34:24.104 "traddr": "10.0.0.2", 00:34:24.104 "adrfam": "ipv4", 00:34:24.104 "trsvcid": "4420", 00:34:24.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.104 "hdgst": false, 00:34:24.104 "ddgst": false 00:34:24.104 }, 00:34:24.104 "method": "bdev_nvme_attach_controller" 00:34:24.104 }' 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:24.104 07:43:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.671 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:24.671 fio-3.35 00:34:24.671 Starting 1 thread 00:34:36.879 00:34:36.879 filename0: (groupid=0, jobs=1): err= 0: pid=989555: Tue Nov 26 07:44:02 2024 00:34:36.879 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10008msec) 00:34:36.879 slat (nsec): min=5883, max=25637, avg=6184.99, stdev=708.74 00:34:36.879 clat (usec): min=399, max=46434, avg=21230.20, stdev=20704.74 00:34:36.879 lat (usec): min=405, max=46459, avg=21236.39, stdev=20704.71 00:34:36.879 clat percentiles (usec): 00:34:36.879 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 453], 20.00th=[ 482], 00:34:36.879 | 30.00th=[ 490], 40.00th=[ 545], 50.00th=[40633], 60.00th=[41681], 00:34:36.879 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:34:36.879 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:34:36.879 | 99.99th=[46400] 00:34:36.879 bw ( KiB/s): min= 704, max= 768, per=99.87%, avg=752.00, stdev=28.43, samples=20 00:34:36.879 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:34:36.879 lat (usec) : 500=36.73%, 750=13.16% 00:34:36.879 lat (msec) : 50=50.11% 00:34:36.879 cpu : usr=92.11%, sys=7.65%, ctx=12, majf=0, minf=0 00:34:36.879 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.879 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.879 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:36.879 
00:34:36.879 Run status group 0 (all jobs): 00:34:36.879 READ: bw=753KiB/s (771kB/s), 753KiB/s-753KiB/s (771kB/s-771kB/s), io=7536KiB (7717kB), run=10008-10008msec 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.879 00:34:36.879 real 0m11.109s 00:34:36.879 user 0m15.832s 00:34:36.879 sys 0m1.063s 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.879 ************************************ 00:34:36.879 END TEST fio_dif_1_default 00:34:36.879 ************************************ 00:34:36.879 07:44:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:36.879 07:44:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:36.879 07:44:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.879 07:44:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.879 ************************************ 00:34:36.879 START TEST fio_dif_1_multi_subsystems 00:34:36.879 ************************************ 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:36.879 bdev_null0 00:34:36.879 07:44:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:36.879 [2024-11-26 07:44:03.289492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:36.879 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:36.880 bdev_null1 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:36.880 { 00:34:36.880 "params": { 00:34:36.880 "name": "Nvme$subsystem", 00:34:36.880 "trtype": "$TEST_TRANSPORT", 00:34:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.880 "adrfam": "ipv4", 00:34:36.880 "trsvcid": "$NVMF_PORT", 00:34:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.880 "hdgst": ${hdgst:-false}, 00:34:36.880 "ddgst": ${ddgst:-false} 00:34:36.880 }, 00:34:36.880 "method": "bdev_nvme_attach_controller" 00:34:36.880 } 00:34:36.880 EOF 00:34:36.880 )") 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file <= files )) 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:36.880 { 00:34:36.880 "params": { 00:34:36.880 "name": "Nvme$subsystem", 00:34:36.880 "trtype": "$TEST_TRANSPORT", 00:34:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.880 "adrfam": "ipv4", 00:34:36.880 "trsvcid": "$NVMF_PORT", 00:34:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.880 "hdgst": ${hdgst:-false}, 00:34:36.880 "ddgst": ${ddgst:-false} 00:34:36.880 }, 00:34:36.880 "method": "bdev_nvme_attach_controller" 00:34:36.880 } 00:34:36.880 EOF 00:34:36.880 )") 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:36.880 "params": { 00:34:36.880 "name": "Nvme0", 00:34:36.880 "trtype": "tcp", 00:34:36.880 "traddr": "10.0.0.2", 00:34:36.880 "adrfam": "ipv4", 00:34:36.880 "trsvcid": "4420", 00:34:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.880 "hdgst": false, 00:34:36.880 "ddgst": false 00:34:36.880 }, 00:34:36.880 "method": "bdev_nvme_attach_controller" 00:34:36.880 },{ 00:34:36.880 "params": { 00:34:36.880 "name": "Nvme1", 00:34:36.880 "trtype": "tcp", 00:34:36.880 "traddr": "10.0.0.2", 00:34:36.880 "adrfam": "ipv4", 00:34:36.880 "trsvcid": "4420", 00:34:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:36.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:36.880 "hdgst": false, 00:34:36.880 "ddgst": false 00:34:36.880 }, 00:34:36.880 "method": "bdev_nvme_attach_controller" 00:34:36.880 }' 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:36.880 07:44:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.880 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:36.880 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:36.880 fio-3.35 00:34:36.880 Starting 2 threads 00:34:46.857 00:34:46.857 filename0: (groupid=0, jobs=1): err= 0: pid=991519: Tue Nov 26 07:44:14 2024 00:34:46.857 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10006msec) 00:34:46.857 slat (nsec): min=6157, max=38208, avg=9765.72, stdev=6039.34 00:34:46.857 clat (usec): min=40807, max=42139, avg=41315.61, stdev=475.37 00:34:46.857 lat (usec): min=40814, max=42161, avg=41325.38, stdev=475.31 00:34:46.857 clat percentiles (usec): 00:34:46.857 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:46.857 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:46.857 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:46.857 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:46.857 | 99.99th=[42206] 00:34:46.857 bw ( KiB/s): min= 352, max= 416, per=33.80%, avg=385.60, stdev=12.61, samples=20 00:34:46.857 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:34:46.857 lat (msec) : 50=100.00% 00:34:46.857 cpu : usr=96.95%, sys=2.78%, ctx=16, majf=0, minf=26 00:34:46.857 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.857 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.857 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:46.857 filename1: (groupid=0, jobs=1): err= 0: pid=991520: Tue Nov 26 07:44:14 2024 00:34:46.857 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10030msec) 00:34:46.857 slat (nsec): min=6188, max=59526, avg=9165.63, stdev=6310.68 00:34:46.857 clat (usec): min=441, max=42907, avg=21221.94, stdev=20525.66 00:34:46.857 lat (usec): min=447, max=42920, avg=21231.10, stdev=20523.81 00:34:46.857 clat percentiles (usec): 00:34:46.857 | 1.00th=[ 461], 5.00th=[ 506], 10.00th=[ 594], 20.00th=[ 627], 00:34:46.857 | 30.00th=[ 644], 40.00th=[ 652], 50.00th=[41157], 60.00th=[41157], 00:34:46.857 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:34:46.857 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:46.857 | 99.99th=[42730] 00:34:46.857 bw ( KiB/s): min= 672, max= 769, per=66.11%, avg=753.80, stdev=30.22, samples=20 00:34:46.857 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:34:46.857 lat (usec) : 500=4.34%, 750=43.96%, 1000=1.48% 00:34:46.857 lat (msec) : 50=50.21% 00:34:46.857 cpu : usr=98.46%, sys=1.25%, ctx=29, majf=0, minf=98 00:34:46.857 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.857 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.857 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.857 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:46.857 00:34:46.857 Run status group 0 (all jobs): 00:34:46.857 READ: bw=1139KiB/s (1166kB/s), 387KiB/s-753KiB/s (396kB/s-771kB/s), io=11.2MiB (11.7MB), run=10006-10030msec 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.857 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.858 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:46.858 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.858 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.858 00:34:46.858 real 0m11.446s 00:34:46.858 user 0m26.311s 00:34:46.858 sys 0m0.709s 00:34:46.858 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.858 07:44:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 ************************************ 00:34:46.858 END TEST fio_dif_1_multi_subsystems 00:34:46.858 ************************************ 00:34:46.858 07:44:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:34:46.858 07:44:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:46.858 07:44:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.858 07:44:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 ************************************ 00:34:46.858 START TEST fio_dif_rand_params 00:34:46.858 ************************************ 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 bdev_null0 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:46.858 [2024-11-26 07:44:14.811176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.858 07:44:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:46.858 { 00:34:46.858 "params": { 00:34:46.858 "name": "Nvme$subsystem", 00:34:46.858 "trtype": "$TEST_TRANSPORT", 00:34:46.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:46.858 "adrfam": "ipv4", 00:34:46.858 "trsvcid": "$NVMF_PORT", 00:34:46.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:46.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:46.858 "hdgst": ${hdgst:-false}, 00:34:46.858 "ddgst": ${ddgst:-false} 00:34:46.858 }, 00:34:46.858 "method": "bdev_nvme_attach_controller" 00:34:46.858 } 00:34:46.858 EOF 00:34:46.858 )") 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:46.858 "params": { 00:34:46.858 "name": "Nvme0", 00:34:46.858 "trtype": "tcp", 00:34:46.858 "traddr": "10.0.0.2", 00:34:46.858 "adrfam": "ipv4", 00:34:46.858 "trsvcid": "4420", 00:34:46.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:46.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:46.858 "hdgst": false, 00:34:46.858 "ddgst": false 00:34:46.858 }, 00:34:46.858 "method": "bdev_nvme_attach_controller" 00:34:46.858 }' 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:46.858 07:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:47.117 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:47.117 ... 
00:34:47.117 fio-3.35 00:34:47.117 Starting 3 threads 00:34:53.682 00:34:53.682 filename0: (groupid=0, jobs=1): err= 0: pid=993484: Tue Nov 26 07:44:20 2024 00:34:53.682 read: IOPS=338, BW=42.3MiB/s (44.3MB/s)(213MiB/5046msec) 00:34:53.682 slat (nsec): min=6240, max=42112, avg=13139.71, stdev=6323.65 00:34:53.682 clat (usec): min=3191, max=50731, avg=8825.97, stdev=6279.88 00:34:53.682 lat (usec): min=3202, max=50743, avg=8839.11, stdev=6279.84 00:34:53.682 clat percentiles (usec): 00:34:53.682 | 1.00th=[ 4424], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 6980], 00:34:53.682 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:34:53.682 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[ 9896], 00:34:53.682 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:34:53.682 | 99.99th=[50594] 00:34:53.682 bw ( KiB/s): min=33024, max=48640, per=35.97%, avg=43648.00, stdev=4499.66, samples=10 00:34:53.682 iops : min= 258, max= 380, avg=341.00, stdev=35.15, samples=10 00:34:53.682 lat (msec) : 4=0.76%, 10=94.67%, 20=2.17%, 50=2.23%, 100=0.18% 00:34:53.682 cpu : usr=96.31%, sys=3.37%, ctx=7, majf=0, minf=9 00:34:53.682 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.682 issued rwts: total=1707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.682 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.682 filename0: (groupid=0, jobs=1): err= 0: pid=993485: Tue Nov 26 07:44:20 2024 00:34:53.682 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(189MiB/5003msec) 00:34:53.682 slat (nsec): min=6242, max=76348, avg=15145.52, stdev=6662.41 00:34:53.682 clat (usec): min=3642, max=51920, avg=9933.66, stdev=5590.63 00:34:53.682 lat (usec): min=3648, max=51932, avg=9948.80, stdev=5590.82 00:34:53.682 clat percentiles (usec): 00:34:53.682 | 1.00th=[ 5211], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7635], 00:34:53.682 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10028], 00:34:53.682 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11469], 95.00th=[12125], 00:34:53.682 | 99.00th=[49021], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:34:53.682 | 99.99th=[52167] 00:34:53.682 bw ( KiB/s): min=31488, max=44544, per=32.09%, avg=38940.44, stdev=4836.04, samples=9 00:34:53.682 iops : min= 246, max= 348, avg=304.22, stdev=37.78, samples=9 00:34:53.682 lat (msec) : 4=0.60%, 10=59.55%, 20=38.06%, 50=1.26%, 100=0.53% 00:34:53.682 cpu : usr=95.24%, sys=4.42%, ctx=18, majf=0, minf=10 00:34:53.682 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.682 issued rwts: total=1508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.682 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.682 filename0: (groupid=0, jobs=1): err= 0: pid=993486: Tue Nov 26 07:44:20 2024 00:34:53.682 read: IOPS=311, BW=38.9MiB/s (40.8MB/s)(196MiB/5044msec) 00:34:53.682 slat (nsec): min=6269, max=43799, avg=14514.46, stdev=6584.43 00:34:53.682 clat (usec): min=3248, max=50123, avg=9600.94, stdev=5056.28 00:34:53.682 lat (usec): min=3256, max=50135, avg=9615.45, stdev=5056.49 00:34:53.682 clat percentiles (usec): 00:34:53.682 | 1.00th=[ 4146], 5.00th=[ 5866], 10.00th=[ 6325], 
20.00th=[ 7439], 00:34:53.682 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:34:53.682 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[11994], 00:34:53.682 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070], 00:34:53.682 | 99.99th=[50070] 00:34:53.682 bw ( KiB/s): min=35328, max=45056, per=33.06%, avg=40115.20, stdev=3204.38, samples=10 00:34:53.682 iops : min= 276, max= 352, avg=313.40, stdev=25.03, samples=10 00:34:53.682 lat (msec) : 4=0.96%, 10=66.41%, 20=31.17%, 50=1.34%, 100=0.13% 00:34:53.683 cpu : usr=96.27%, sys=3.41%, ctx=10, majf=0, minf=9 00:34:53.683 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.683 issued rwts: total=1569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.683 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:53.683 00:34:53.683 Run status group 0 (all jobs): 00:34:53.683 READ: bw=119MiB/s (124MB/s), 37.7MiB/s-42.3MiB/s (39.5MB/s-44.3MB/s), io=598MiB (627MB), run=5003-5046msec 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 bdev_null0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 [2024-11-26 07:44:21.128629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 bdev_null1 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 bdev_null2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.683 { 00:34:53.683 "params": { 00:34:53.683 "name": "Nvme$subsystem", 00:34:53.683 "trtype": "$TEST_TRANSPORT", 00:34:53.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.683 "adrfam": "ipv4", 00:34:53.683 "trsvcid": "$NVMF_PORT", 00:34:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.683 "hdgst": ${hdgst:-false}, 00:34:53.683 "ddgst": ${ddgst:-false} 00:34:53.683 }, 00:34:53.683 "method": "bdev_nvme_attach_controller" 00:34:53.683 } 00:34:53.683 EOF 00:34:53.683 )") 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.683 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.684 { 00:34:53.684 "params": { 00:34:53.684 "name": "Nvme$subsystem", 00:34:53.684 "trtype": "$TEST_TRANSPORT", 00:34:53.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.684 "adrfam": "ipv4", 00:34:53.684 "trsvcid": "$NVMF_PORT", 00:34:53.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.684 "hdgst": ${hdgst:-false}, 00:34:53.684 "ddgst": ${ddgst:-false} 00:34:53.684 }, 00:34:53.684 "method": "bdev_nvme_attach_controller" 00:34:53.684 } 00:34:53.684 EOF 00:34:53.684 )") 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.684 { 00:34:53.684 "params": { 00:34:53.684 "name": "Nvme$subsystem", 00:34:53.684 "trtype": "$TEST_TRANSPORT", 00:34:53.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.684 "adrfam": "ipv4", 00:34:53.684 "trsvcid": "$NVMF_PORT", 00:34:53.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.684 "hdgst": ${hdgst:-false}, 00:34:53.684 "ddgst": ${ddgst:-false} 00:34:53.684 }, 00:34:53.684 "method": "bdev_nvme_attach_controller" 00:34:53.684 } 00:34:53.684 EOF 00:34:53.684 )") 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:53.684 "params": { 00:34:53.684 "name": "Nvme0", 00:34:53.684 "trtype": "tcp", 00:34:53.684 "traddr": "10.0.0.2", 00:34:53.684 "adrfam": "ipv4", 00:34:53.684 "trsvcid": "4420", 00:34:53.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:53.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:53.684 "hdgst": false, 00:34:53.684 "ddgst": false 00:34:53.684 }, 00:34:53.684 "method": "bdev_nvme_attach_controller" 00:34:53.684 },{ 00:34:53.684 "params": { 00:34:53.684 "name": "Nvme1", 00:34:53.684 "trtype": "tcp", 00:34:53.684 "traddr": "10.0.0.2", 00:34:53.684 "adrfam": "ipv4", 00:34:53.684 "trsvcid": "4420", 00:34:53.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.684 "hdgst": false, 00:34:53.684 "ddgst": false 00:34:53.684 }, 00:34:53.684 "method": "bdev_nvme_attach_controller" 00:34:53.684 },{ 00:34:53.684 "params": { 00:34:53.684 "name": "Nvme2", 00:34:53.684 "trtype": "tcp", 00:34:53.684 "traddr": "10.0.0.2", 00:34:53.684 "adrfam": "ipv4", 00:34:53.684 "trsvcid": "4420", 00:34:53.684 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:53.684 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:53.684 "hdgst": false, 00:34:53.684 "ddgst": false 00:34:53.684 }, 00:34:53.684 "method": "bdev_nvme_attach_controller" 00:34:53.684 }' 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:53.684 07:44:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.684 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:53.684 ... 00:34:53.684 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:53.684 ... 00:34:53.684 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:53.684 ... 00:34:53.684 fio-3.35 00:34:53.684 Starting 24 threads 00:35:05.902 00:35:05.902 filename0: (groupid=0, jobs=1): err= 0: pid=994661: Tue Nov 26 07:44:32 2024 00:35:05.902 read: IOPS=576, BW=2305KiB/s (2361kB/s)(22.6MiB/10022msec) 00:35:05.902 slat (nsec): min=4167, max=51463, avg=16736.42, stdev=7513.07 00:35:05.902 clat (usec): min=2281, max=41712, avg=27625.26, stdev=2791.63 00:35:05.902 lat (usec): min=2290, max=41736, avg=27642.00, stdev=2792.15 00:35:05.902 clat percentiles (usec): 00:35:05.902 | 1.00th=[10552], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.902 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.902 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.902 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:35:05.902 | 99.99th=[41681] 00:35:05.902 bw ( KiB/s): min= 2176, max= 2816, per=4.20%, avg=2304.00, stdev=131.33, samples=20 00:35:05.902 iops : min= 544, max= 704, avg=576.00, stdev=32.83, samples=20 00:35:05.902 lat (msec) : 4=0.83%, 20=1.14%, 50=98.03% 00:35:05.902 cpu : usr=98.19%, sys=1.45%, ctx=37, majf=0, minf=11 00:35:05.902 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:05.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.902 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.902 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.902 filename0: (groupid=0, jobs=1): err= 0: pid=994662: Tue Nov 26 07:44:32 2024 00:35:05.902 read: IOPS=577, BW=2309KiB/s (2364kB/s)(22.6MiB/10007msec) 00:35:05.902 slat (nsec): min=3272, max=50197, avg=11412.58, stdev=4326.51 00:35:05.902 clat (usec): min=2291, max=29001, avg=27618.46, stdev=2842.60 00:35:05.902 lat (usec): min=2306, max=29014, avg=27629.87, stdev=2842.70 00:35:05.902 clat percentiles (usec): 00:35:05.902 | 1.00th=[10552], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:35:05.902 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.902 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.902 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:35:05.902 | 99.99th=[28967] 00:35:05.902 bw ( KiB/s): min= 2176, max= 2821, per=4.20%, avg=2304.25, stdev=132.35, samples=20 00:35:05.902 iops : min= 544, max= 705, avg=576.05, stdev=33.04, samples=20 00:35:05.902 lat (msec) : 4=0.83%, 20=1.39%, 50=97.78% 00:35:05.902 cpu : usr=98.49%, sys=1.16%, ctx=56, majf=0, minf=9 00:35:05.902 IO depths : 1=6.2%, 
2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.902 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.902 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.902 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.902 filename0: (groupid=0, jobs=1): err= 0: pid=994663: Tue Nov 26 07:44:32 2024 00:35:05.902 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:35:05.902 slat (nsec): min=7637, max=45162, avg=20645.05, stdev=6781.61 00:35:05.902 clat (usec): min=11169, max=29624, avg=27850.80, stdev=1265.80 00:35:05.902 lat (usec): min=11196, max=29644, avg=27871.45, stdev=1265.23 00:35:05.903 clat percentiles (usec): 00:35:05.903 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.903 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.903 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.903 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:35:05.903 | 99.99th=[29754] 00:35:05.903 bw ( KiB/s): min= 2176, max= 2432, per=4.16%, avg=2278.40, stdev=66.96, samples=20 00:35:05.903 iops : min= 544, max= 608, avg=569.60, stdev=16.74, samples=20 00:35:05.903 lat (msec) : 20=0.84%, 50=99.16% 00:35:05.903 cpu : usr=98.26%, sys=1.38%, ctx=13, majf=0, minf=9 00:35:05.903 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.903 filename0: (groupid=0, jobs=1): err= 0: pid=994664: Tue Nov 26 07:44:32 2024 00:35:05.903 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.2MiB/10009msec) 00:35:05.903 slat (nsec): min=4797, max=57076, avg=28490.52, stdev=8733.45 00:35:05.903 clat (usec): min=14881, max=40297, avg=27856.52, stdev=1077.08 00:35:05.903 lat (usec): min=14899, max=40310, avg=27885.01, stdev=1076.78 00:35:05.903 clat percentiles (usec): 00:35:05.903 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.903 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:05.903 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.903 | 99.00th=[28967], 99.50th=[28967], 99.90th=[40109], 99.95th=[40109], 00:35:05.903 | 99.99th=[40109] 00:35:05.903 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2271.05, stdev=56.07, samples=20 00:35:05.903 iops : min= 544, max= 576, avg=567.75, stdev=14.01, samples=20 00:35:05.903 lat (msec) : 20=0.28%, 50=99.72% 00:35:05.903 cpu : usr=98.34%, sys=1.32%, ctx=11, majf=0, minf=9 00:35:05.903 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.903 filename0: (groupid=0, jobs=1): err= 0: pid=994665: Tue Nov 26 07:44:32 2024 00:35:05.903 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:35:05.903 slat (usec): min=7, max=142, avg=55.10, stdev= 6.20 
00:35:05.903 clat (usec): min=13868, max=44241, avg=27618.31, stdev=836.34 00:35:05.903 lat (usec): min=13904, max=44296, avg=27673.41, stdev=837.50 00:35:05.903 clat percentiles (usec): 00:35:05.903 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:35:05.903 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:35:05.903 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:05.903 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29492], 99.95th=[40633], 00:35:05.903 | 99.99th=[44303] 00:35:05.903 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2272.00, stdev=56.87, samples=20 00:35:05.903 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:35:05.903 lat (msec) : 20=0.35%, 50=99.65% 00:35:05.903 cpu : usr=98.67%, sys=0.94%, ctx=12, majf=0, minf=9 00:35:05.903 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.903 filename0: (groupid=0, jobs=1): err= 0: pid=994666: Tue Nov 26 07:44:32 2024 00:35:05.903 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10001msec) 00:35:05.903 slat (nsec): min=6917, max=56248, avg=16327.93, stdev=9886.30 00:35:05.903 clat (usec): min=15582, max=33247, avg=27899.41, stdev=1065.24 00:35:05.903 lat (usec): min=15603, max=33262, avg=27915.74, stdev=1064.71 00:35:05.903 clat percentiles (usec): 00:35:05.903 | 1.00th=[22152], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.903 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.903 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.903 | 99.00th=[28967], 99.50th=[29230], 99.90th=[30802], 99.95th=[33162], 00:35:05.903 | 99.99th=[33162] 00:35:05.903 bw ( KiB/s): min= 2176, max= 2408, per=4.16%, avg=2282.53, stdev=61.28, samples=19 00:35:05.903 iops : min= 544, max= 602, avg=570.63, stdev=15.32, samples=19 00:35:05.903 lat (msec) : 20=0.61%, 50=99.39% 00:35:05.903 cpu : usr=98.38%, sys=1.29%, ctx=15, majf=0, minf=9 00:35:05.903 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:05.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 issued rwts: total=5709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.903 filename0: (groupid=0, jobs=1): err= 0: pid=994667: Tue Nov 26 07:44:32 2024 00:35:05.903 read: IOPS=568, BW=2276KiB/s (2330kB/s)(22.2MiB/10012msec) 00:35:05.903 slat (nsec): min=5878, max=56877, avg=27809.65, stdev=8543.06 00:35:05.903 clat (usec): min=14784, max=42532, avg=27886.77, stdev=1166.40 00:35:05.903 lat (usec): min=14803, max=42553, avg=27914.58, stdev=1165.84 00:35:05.903 clat percentiles (usec): 00:35:05.903 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.903 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:05.903 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.903 | 99.00th=[28967], 99.50th=[29230], 99.90th=[42206], 99.95th=[42730], 00:35:05.903 | 99.99th=[42730] 00:35:05.903 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, 
avg=2270.40, stdev=54.67, samples=20 00:35:05.903 iops : min= 544, max= 576, avg=567.60, stdev=13.67, samples=20 00:35:05.903 lat (msec) : 20=0.28%, 50=99.72% 00:35:05.903 cpu : usr=98.40%, sys=1.26%, ctx=14, majf=0, minf=9 00:35:05.903 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.903 filename0: (groupid=0, jobs=1): err= 0: pid=994669: Tue Nov 26 07:44:32 2024 00:35:05.903 read: IOPS=571, BW=2285KiB/s (2340kB/s)(22.3MiB/10004msec) 00:35:05.903 slat (nsec): min=3886, max=57140, avg=26166.37, stdev=9554.27 00:35:05.903 clat (usec): min=16155, max=45604, avg=27756.29, stdev=1479.79 00:35:05.903 lat (usec): min=16163, max=45619, avg=27782.45, stdev=1481.27 00:35:05.903 clat percentiles (usec): 00:35:05.903 | 1.00th=[20579], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:05.903 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:05.903 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.903 | 99.00th=[28967], 99.50th=[32637], 99.90th=[45351], 99.95th=[45351], 00:35:05.903 | 99.99th=[45351] 00:35:05.903 bw ( KiB/s): min= 2176, max= 2336, per=4.16%, avg=2280.00, stdev=53.82, samples=20 00:35:05.903 iops : min= 544, max= 584, avg=570.00, stdev=13.46, samples=20 00:35:05.903 lat (msec) : 20=0.28%, 50=99.72% 00:35:05.903 cpu : usr=98.47%, sys=1.20%, ctx=13, majf=0, minf=9 00:35:05.903 IO depths : 1=5.8%, 2=11.7%, 4=23.7%, 8=51.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:05.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 issued rwts: total=5716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.903 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.903 filename1: (groupid=0, jobs=1): err= 0: pid=994670: Tue Nov 26 07:44:32 2024 00:35:05.903 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:35:05.903 slat (nsec): min=8015, max=43551, avg=20495.27, stdev=6311.19 00:35:05.903 clat (usec): min=11048, max=29769, avg=27848.72, stdev=1262.63 00:35:05.903 lat (usec): min=11071, max=29782, avg=27869.21, stdev=1262.35 00:35:05.903 clat percentiles (usec): 00:35:05.903 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.903 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.903 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.903 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29754], 99.95th=[29754], 00:35:05.903 | 99.99th=[29754] 00:35:05.903 bw ( KiB/s): min= 2176, max= 2432, per=4.16%, avg=2278.40, stdev=66.96, samples=20 00:35:05.903 iops : min= 544, max= 608, avg=569.60, stdev=16.74, samples=20 00:35:05.903 lat (msec) : 20=0.84%, 50=99.16% 00:35:05.903 cpu : usr=98.43%, sys=1.24%, ctx=15, majf=0, minf=9 00:35:05.903 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.903 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.903 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:05.903 filename1: (groupid=0, jobs=1): err= 0: pid=994671: Tue Nov 26 07:44:32 2024 00:35:05.903 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.2MiB/10010msec) 00:35:05.903 slat (nsec): min=4386, max=60014, avg=27877.93, stdev=8476.22 00:35:05.903 clat (usec): min=14798, max=41313, avg=27861.42, stdev=1112.37 00:35:05.903 lat (usec): min=14814, max=41327, avg=27889.30, stdev=1112.11 00:35:05.903 clat percentiles (usec): 00:35:05.903 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.903 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:05.903 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.903 | 99.00th=[28967], 99.50th=[28967], 99.90th=[41157], 99.95th=[41157], 00:35:05.903 | 99.99th=[41157] 00:35:05.903 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.85, stdev=56.42, samples=20 00:35:05.903 iops : min= 544, max= 576, avg=567.70, stdev=14.10, samples=20 00:35:05.903 lat (msec) : 20=0.28%, 50=99.72% 00:35:05.903 cpu : usr=98.42%, sys=1.25%, ctx=13, majf=0, minf=9 00:35:05.903 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.904 filename1: (groupid=0, jobs=1): err= 0: pid=994672: Tue Nov 26 07:44:32 2024 00:35:05.904 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:35:05.904 slat (nsec): min=8808, max=43779, avg=21601.63, stdev=6164.27 00:35:05.904 clat (usec): min=11185, max=29641, avg=27830.65, stdev=1264.79 00:35:05.904 lat (usec): min=11208, max=29671, avg=27852.25, stdev=1264.84 00:35:05.904 clat percentiles (usec): 00:35:05.904 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.904 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.904 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.904 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:35:05.904 | 99.99th=[29754] 00:35:05.904 bw ( KiB/s): min= 2176, max= 2432, per=4.16%, avg=2278.40, stdev=66.96, samples=20 00:35:05.904 iops : min= 544, max= 608, avg=569.60, stdev=16.74, samples=20 00:35:05.904 lat (msec) : 20=0.84%, 50=99.16% 00:35:05.904 cpu : usr=98.43%, sys=1.24%, ctx=13, majf=0, minf=9 00:35:05.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.904 filename1: (groupid=0, jobs=1): err= 0: pid=994673: Tue Nov 26 07:44:32 2024 00:35:05.904 read: IOPS=568, BW=2276KiB/s (2330kB/s)(22.2MiB/10012msec) 00:35:05.904 slat (nsec): min=6862, max=57665, avg=27522.11, stdev=8842.51 00:35:05.904 clat (usec): min=14842, max=42609, avg=27894.18, stdev=1156.66 00:35:05.904 lat (usec): min=14858, max=42627, avg=27921.70, stdev=1155.89 00:35:05.904 clat percentiles (usec): 00:35:05.904 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.904 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 
60.00th=[27919], 00:35:05.904 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.904 | 99.00th=[28967], 99.50th=[28967], 99.90th=[42730], 99.95th=[42730], 00:35:05.904 | 99.99th=[42730] 00:35:05.904 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.40, stdev=56.37, samples=20 00:35:05.904 iops : min= 544, max= 576, avg=567.60, stdev=14.09, samples=20 00:35:05.904 lat (msec) : 20=0.28%, 50=99.72% 00:35:05.904 cpu : usr=98.21%, sys=1.45%, ctx=14, majf=0, minf=9 00:35:05.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.904 filename1: (groupid=0, jobs=1): err= 0: pid=994674: Tue Nov 26 07:44:32 2024 00:35:05.904 read: IOPS=568, BW=2276KiB/s (2330kB/s)(22.2MiB/10012msec) 00:35:05.904 slat (nsec): min=8550, max=55414, avg=27975.64, stdev=8246.06 00:35:05.904 clat (usec): min=14733, max=42567, avg=27879.23, stdev=1154.30 00:35:05.904 lat (usec): min=14761, max=42583, avg=27907.21, stdev=1153.77 00:35:05.904 clat percentiles (usec): 00:35:05.904 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.904 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:05.904 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.904 | 99.00th=[28967], 99.50th=[28967], 99.90th=[42730], 99.95th=[42730], 00:35:05.904 | 99.99th=[42730] 00:35:05.904 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.40, stdev=56.37, samples=20 00:35:05.904 iops : min= 544, max= 576, avg=567.60, stdev=14.09, samples=20 00:35:05.904 lat (msec) : 20=0.28%, 50=99.72% 00:35:05.904 cpu : usr=98.35%, sys=1.31%, ctx=14, majf=0, minf=9 00:35:05.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.904 filename1: (groupid=0, jobs=1): err= 0: pid=994675: Tue Nov 26 07:44:32 2024 00:35:05.904 read: IOPS=571, BW=2286KiB/s (2341kB/s)(22.4MiB/10015msec) 00:35:05.904 slat (nsec): min=6988, max=57732, avg=13776.92, stdev=7165.11 00:35:05.904 clat (usec): min=10836, max=33104, avg=27868.99, stdev=1312.54 00:35:05.904 lat (usec): min=10845, max=33112, avg=27882.77, stdev=1312.64 00:35:05.904 clat percentiles (usec): 00:35:05.904 | 1.00th=[22414], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:35:05.904 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.904 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.904 | 99.00th=[28705], 99.50th=[30278], 99.90th=[31851], 99.95th=[33162], 00:35:05.904 | 99.99th=[33162] 00:35:05.904 bw ( KiB/s): min= 2176, max= 2400, per=4.17%, avg=2283.40, stdev=59.06, samples=20 00:35:05.904 iops : min= 544, max= 600, avg=570.85, stdev=14.77, samples=20 00:35:05.904 lat (msec) : 20=0.66%, 50=99.34% 00:35:05.904 cpu : usr=98.52%, sys=1.14%, ctx=14, majf=0, minf=9 00:35:05.904 IO depths : 1=6.0%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:05.904 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 issued rwts: total=5724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.904 filename1: (groupid=0, jobs=1): err= 0: pid=994676: Tue Nov 26 07:44:32 2024 00:35:05.904 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10008msec) 00:35:05.904 slat (nsec): min=5625, max=55631, avg=26150.78, stdev=9620.29 00:35:05.904 clat (usec): min=14794, max=50498, avg=27867.81, stdev=1269.22 00:35:05.904 lat (usec): min=14807, max=50514, avg=27893.96, stdev=1269.35 00:35:05.904 clat percentiles (usec): 00:35:05.904 | 1.00th=[23987], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.904 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:05.904 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.904 | 99.00th=[28967], 99.50th=[31851], 99.90th=[39584], 99.95th=[50594], 00:35:05.904 | 99.99th=[50594] 00:35:05.904 bw ( KiB/s): min= 2140, max= 2304, per=4.14%, avg=2271.05, stdev=57.40, samples=20 00:35:05.904 iops : min= 535, max= 576, avg=567.75, stdev=14.35, samples=20 00:35:05.904 lat (msec) : 20=0.28%, 50=99.63%, 100=0.09% 00:35:05.904 cpu : usr=98.44%, sys=1.23%, ctx=14, majf=0, minf=9 00:35:05.904 IO depths : 1=5.5%, 2=11.1%, 4=23.3%, 8=52.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:35:05.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.904 filename1: (groupid=0, jobs=1): err= 0: pid=994677: Tue Nov 26 07:44:32 2024 00:35:05.904 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10008msec) 00:35:05.904 slat (nsec): min=4658, max=44222, avg=18870.86, stdev=7062.91 00:35:05.904 clat (usec): min=13392, max=43985, avg=27924.40, stdev=1291.56 00:35:05.904 lat (usec): min=13400, max=43998, avg=27943.27, stdev=1291.96 00:35:05.904 clat percentiles (usec): 00:35:05.904 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.904 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.904 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.904 | 99.00th=[29230], 99.50th=[29492], 99.90th=[43779], 99.95th=[43779], 00:35:05.904 | 99.99th=[43779] 00:35:05.904 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2271.50, stdev=56.61, samples=20 00:35:05.904 iops : min= 544, max= 576, avg=567.85, stdev=14.14, samples=20 00:35:05.904 lat (msec) : 20=0.56%, 50=99.44% 00:35:05.904 cpu : usr=98.32%, sys=1.33%, ctx=14, majf=0, minf=9 00:35:05.904 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.904 filename2: (groupid=0, jobs=1): err= 0: pid=994679: Tue Nov 26 07:44:32 2024 00:35:05.904 read: IOPS=592, BW=2371KiB/s (2427kB/s)(23.2MiB/10006msec) 00:35:05.904 slat (nsec): min=6936, max=57346, avg=13203.48, stdev=8222.18 00:35:05.904 clat (usec): min=8454, max=64752, avg=26941.86, 
stdev=4255.51 00:35:05.904 lat (usec): min=8461, max=64770, avg=26955.07, stdev=4255.15 00:35:05.904 clat percentiles (usec): 00:35:05.904 | 1.00th=[16909], 5.00th=[20317], 10.00th=[20841], 20.00th=[22676], 00:35:05.904 | 30.00th=[26346], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.904 | 70.00th=[28181], 80.00th=[28181], 90.00th=[30540], 95.00th=[33817], 00:35:05.904 | 99.00th=[35390], 99.50th=[40109], 99.90th=[64750], 99.95th=[64750], 00:35:05.904 | 99.99th=[64750] 00:35:05.904 bw ( KiB/s): min= 2240, max= 2512, per=4.32%, avg=2368.00, stdev=88.10, samples=20 00:35:05.904 iops : min= 560, max= 628, avg=592.00, stdev=22.02, samples=20 00:35:05.904 lat (msec) : 10=0.17%, 20=2.39%, 50=97.17%, 100=0.27% 00:35:05.904 cpu : usr=98.56%, sys=1.09%, ctx=13, majf=0, minf=9 00:35:05.904 IO depths : 1=0.1%, 2=0.1%, 4=2.6%, 8=81.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:35:05.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 complete : 0=0.0%, 4=88.9%, 8=9.0%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.904 issued rwts: total=5930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.904 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.904 filename2: (groupid=0, jobs=1): err= 0: pid=994680: Tue Nov 26 07:44:32 2024 00:35:05.904 read: IOPS=568, BW=2276KiB/s (2331kB/s)(22.2MiB/10011msec) 00:35:05.905 slat (nsec): min=5845, max=61791, avg=26678.25, stdev=9560.40 00:35:05.905 clat (usec): min=15037, max=42269, avg=27907.78, stdev=1138.55 00:35:05.905 lat (usec): min=15066, max=42286, avg=27934.46, stdev=1137.65 00:35:05.905 clat percentiles (usec): 00:35:05.905 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.905 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.905 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.905 | 99.00th=[28967], 99.50th=[28967], 99.90th=[42206], 99.95th=[42206], 00:35:05.905 | 99.99th=[42206] 00:35:05.905 bw ( KiB/s): min= 2176, max= 2304, per=4.14%, avg=2270.60, stdev=56.02, samples=20 00:35:05.905 iops : min= 544, max= 576, avg=567.65, stdev=14.00, samples=20 00:35:05.905 lat (msec) : 20=0.28%, 50=99.72% 00:35:05.905 cpu : usr=98.24%, sys=1.42%, ctx=17, majf=0, minf=9 00:35:05.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.905 filename2: (groupid=0, jobs=1): err= 0: pid=994681: Tue Nov 26 07:44:32 2024 00:35:05.905 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.3MiB/10005msec) 00:35:05.905 slat (nsec): min=5759, max=55918, avg=25250.18, stdev=10516.47 00:35:05.905 clat (usec): min=14788, max=53142, avg=27854.01, stdev=2136.30 00:35:05.905 lat (usec): min=14802, max=53158, avg=27879.26, stdev=2136.30 00:35:05.905 clat percentiles (usec): 00:35:05.905 | 1.00th=[20579], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:05.905 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:05.905 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.905 | 99.00th=[34866], 99.50th=[38536], 99.90th=[53216], 99.95th=[53216], 00:35:05.905 | 99.99th=[53216] 00:35:05.905 bw ( KiB/s): min= 2064, max= 2320, per=4.15%, avg=2272.80, stdev=67.98, samples=20 
00:35:05.905 iops : min= 516, max= 580, avg=568.20, stdev=16.99, samples=20 00:35:05.905 lat (msec) : 20=0.49%, 50=99.23%, 100=0.28% 00:35:05.905 cpu : usr=98.41%, sys=1.26%, ctx=10, majf=0, minf=9 00:35:05.905 IO depths : 1=5.5%, 2=11.1%, 4=22.5%, 8=53.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:35:05.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 issued rwts: total=5698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.905 filename2: (groupid=0, jobs=1): err= 0: pid=994682: Tue Nov 26 07:44:32 2024 00:35:05.905 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:35:05.905 slat (nsec): min=8782, max=45133, avg=21324.56, stdev=6132.25 00:35:05.905 clat (usec): min=8069, max=29638, avg=27830.89, stdev=1270.88 00:35:05.905 lat (usec): min=8080, max=29659, avg=27852.22, stdev=1271.19 00:35:05.905 clat percentiles (usec): 00:35:05.905 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.905 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.905 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.905 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:35:05.905 | 99.99th=[29754] 00:35:05.905 bw ( KiB/s): min= 2176, max= 2432, per=4.16%, avg=2278.40, stdev=66.96, samples=20 00:35:05.905 iops : min= 544, max= 608, avg=569.60, stdev=16.74, samples=20 00:35:05.905 lat (msec) : 10=0.04%, 20=0.77%, 50=99.19% 00:35:05.905 cpu : usr=98.37%, sys=1.30%, ctx=12, majf=0, minf=9 00:35:05.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.905 filename2: (groupid=0, jobs=1): err= 0: pid=994683: Tue Nov 26 07:44:32 2024 00:35:05.905 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10020msec) 00:35:05.905 slat (nsec): min=7406, max=59400, avg=24396.87, stdev=9425.23 00:35:05.905 clat (usec): min=14584, max=32846, avg=27876.62, stdev=914.60 00:35:05.905 lat (usec): min=14592, max=32861, avg=27901.02, stdev=914.51 00:35:05.905 clat percentiles (usec): 00:35:05.905 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.905 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.905 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:05.905 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[31327], 00:35:05.905 | 99.99th=[32900] 00:35:05.905 bw ( KiB/s): min= 2176, max= 2308, per=4.16%, avg=2278.60, stdev=52.64, samples=20 00:35:05.905 iops : min= 544, max= 577, avg=569.65, stdev=13.16, samples=20 00:35:05.905 lat (msec) : 20=0.28%, 50=99.72% 00:35:05.905 cpu : usr=98.22%, sys=1.45%, ctx=14, majf=0, minf=9 00:35:05.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.905 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:35:05.905 filename2: (groupid=0, jobs=1): err= 0: pid=994684: Tue Nov 26 07:44:32 2024 00:35:05.905 read: IOPS=576, BW=2304KiB/s (2360kB/s)(22.5MiB/10006msec) 00:35:05.905 slat (nsec): min=4615, max=52181, avg=13237.61, stdev=8327.89 00:35:05.905 clat (usec): min=13680, max=53989, avg=27712.81, stdev=3297.04 00:35:05.905 lat (usec): min=13687, max=54002, avg=27726.04, stdev=3295.93 00:35:05.905 clat percentiles (usec): 00:35:05.905 | 1.00th=[19792], 5.00th=[21627], 10.00th=[22938], 20.00th=[27395], 00:35:05.905 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.905 | 70.00th=[28181], 80.00th=[28443], 90.00th=[30540], 95.00th=[33424], 00:35:05.905 | 99.00th=[34866], 99.50th=[35914], 99.90th=[53740], 99.95th=[53740], 00:35:05.905 | 99.99th=[53740] 00:35:05.905 bw ( KiB/s): min= 2052, max= 2416, per=4.20%, avg=2301.30, stdev=74.20, samples=20 00:35:05.905 iops : min= 513, max= 604, avg=575.30, stdev=18.52, samples=20 00:35:05.905 lat (msec) : 20=1.21%, 50=98.51%, 100=0.28% 00:35:05.905 cpu : usr=98.42%, sys=1.24%, ctx=14, majf=0, minf=9 00:35:05.905 IO depths : 1=1.0%, 2=2.2%, 4=6.0%, 8=76.0%, 16=14.8%, 32=0.0%, >=64=0.0% 00:35:05.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 complete : 0=0.0%, 4=89.7%, 8=7.9%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 issued rwts: total=5764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.905 filename2: (groupid=0, jobs=1): err= 0: pid=994685: Tue Nov 26 07:44:32 2024 00:35:05.905 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:35:05.905 slat (nsec): min=12285, max=57775, avg=21518.66, stdev=6116.73 00:35:05.905 clat (usec): min=8045, max=29674, avg=27832.19, stdev=1274.15 00:35:05.905 lat (usec): min=8084, max=29691, avg=27853.70, stdev=1273.84 00:35:05.905 clat percentiles (usec): 00:35:05.905 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:05.905 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:05.905 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:05.905 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29492], 99.95th=[29754], 00:35:05.905 | 99.99th=[29754] 00:35:05.905 bw ( KiB/s): min= 2176, max= 2432, per=4.16%, avg=2278.40, stdev=66.96, samples=20 00:35:05.905 iops : min= 544, max= 608, avg=569.60, stdev=16.74, samples=20 00:35:05.905 lat (msec) : 10=0.04%, 20=0.77%, 50=99.19% 00:35:05.905 cpu : usr=98.34%, sys=1.31%, ctx=10, majf=0, minf=9 00:35:05.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:05.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.905 filename2: (groupid=0, jobs=1): err= 0: pid=994686: Tue Nov 26 07:44:32 2024 00:35:05.905 read: IOPS=567, BW=2272KiB/s (2326kB/s)(22.2MiB/10001msec) 00:35:05.905 slat (nsec): min=24492, max=75623, avg=54923.49, stdev=4676.41 00:35:05.905 clat (usec): min=15806, max=53230, avg=27685.92, stdev=1529.16 00:35:05.905 lat (usec): min=15851, max=53269, avg=27740.85, stdev=1528.28 00:35:05.905 clat percentiles (usec): 00:35:05.905 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:35:05.905 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 
60.00th=[27657], 00:35:05.905 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:35:05.905 | 99.00th=[28705], 99.50th=[28967], 99.90th=[53216], 99.95th=[53216], 00:35:05.905 | 99.99th=[53216] 00:35:05.905 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2263.58, stdev=74.55, samples=19 00:35:05.905 iops : min= 512, max= 576, avg=565.89, stdev=18.64, samples=19 00:35:05.905 lat (msec) : 20=0.28%, 50=99.44%, 100=0.28% 00:35:05.905 cpu : usr=98.48%, sys=1.15%, ctx=8, majf=0, minf=9 00:35:05.905 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:05.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.905 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.905 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:05.905 00:35:05.905 Run status group 0 (all jobs): 00:35:05.905 READ: bw=53.5MiB/s (56.1MB/s), 2272KiB/s-2371KiB/s (2326kB/s-2427kB/s), io=536MiB (562MB), run=10001-10022msec 00:35:05.905 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:05.905 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:05.905 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.905 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:05.905 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:05.905 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:05.905 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.905 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 bdev_null0 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 [2024-11-26 07:44:32.670548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 bdev_null1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
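For reference, the subsystem setup traced above reduces to four SPDK RPCs per null bdev before fio is launched against the generated JSON. A minimal sketch, assuming a running SPDK target with the tcp transport already created and using scripts/rpc.py directly instead of the harness's rpc_cmd wrapper:

  # null bdev -> NVMe-oF subsystem -> namespace -> TCP listener (subsystem 0 shown)
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Teardown, as traced at the end of the 24-thread run above, is the reverse: nvmf_delete_subsystem followed by bdev_null_delete for each subsystem.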
00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:05.906 { 00:35:05.906 "params": { 00:35:05.906 "name": "Nvme$subsystem", 00:35:05.906 "trtype": "$TEST_TRANSPORT", 00:35:05.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.906 "adrfam": "ipv4", 00:35:05.906 "trsvcid": "$NVMF_PORT", 00:35:05.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.906 "hdgst": ${hdgst:-false}, 00:35:05.906 "ddgst": ${ddgst:-false} 00:35:05.906 }, 00:35:05.906 "method": "bdev_nvme_attach_controller" 00:35:05.906 } 00:35:05.906 EOF 00:35:05.906 )") 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:05.906 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:05.906 { 00:35:05.906 "params": { 00:35:05.906 "name": "Nvme$subsystem", 00:35:05.906 "trtype": "$TEST_TRANSPORT", 00:35:05.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.906 "adrfam": "ipv4", 00:35:05.906 "trsvcid": "$NVMF_PORT", 00:35:05.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.907 "hdgst": ${hdgst:-false}, 00:35:05.907 "ddgst": ${ddgst:-false} 00:35:05.907 }, 00:35:05.907 "method": "bdev_nvme_attach_controller" 00:35:05.907 } 00:35:05.907 EOF 00:35:05.907 )") 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
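The fio launch itself is wrapped by a small sanitizer probe: the harness runs ldd on the external spdk_bdev ioengine, greps for libasan and then libclang_rt.asan, and prepends any hit to LD_PRELOAD ahead of the plugin so ASan initializes first; in this job both greps come back empty, so only the plugin is preloaded. A condensed sketch of that logic, assuming the same plugin path and that the JSON bdev config and fio job file are handed in on /dev/fd/62 and /dev/fd/61 by the caller:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  # pick up an ASan runtime if the plugin was built with one (GCC name first, then clang)
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  [ -z "$asan_lib" ] && asan_lib=$(ldd "$plugin" | grep libclang_rt.asan | awk '{print $3}')
  # preload the sanitizer (if any) and the ioengine, then run fio against the generated configs
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61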
00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:05.907 "params": { 00:35:05.907 "name": "Nvme0", 00:35:05.907 "trtype": "tcp", 00:35:05.907 "traddr": "10.0.0.2", 00:35:05.907 "adrfam": "ipv4", 00:35:05.907 "trsvcid": "4420", 00:35:05.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.907 "hdgst": false, 00:35:05.907 "ddgst": false 00:35:05.907 }, 00:35:05.907 "method": "bdev_nvme_attach_controller" 00:35:05.907 },{ 00:35:05.907 "params": { 00:35:05.907 "name": "Nvme1", 00:35:05.907 "trtype": "tcp", 00:35:05.907 "traddr": "10.0.0.2", 00:35:05.907 "adrfam": "ipv4", 00:35:05.907 "trsvcid": "4420", 00:35:05.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:05.907 "hdgst": false, 00:35:05.907 "ddgst": false 00:35:05.907 }, 00:35:05.907 "method": "bdev_nvme_attach_controller" 00:35:05.907 }' 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:05.907 07:44:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.907 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:05.907 ... 00:35:05.907 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:05.907 ... 
00:35:05.907 fio-3.35 00:35:05.907 Starting 4 threads 00:35:11.182 00:35:11.182 filename0: (groupid=0, jobs=1): err= 0: pid=996563: Tue Nov 26 07:44:38 2024 00:35:11.182 read: IOPS=2809, BW=21.9MiB/s (23.0MB/s)(110MiB/5002msec) 00:35:11.182 slat (nsec): min=6130, max=43025, avg=8870.70, stdev=3039.09 00:35:11.182 clat (usec): min=809, max=5691, avg=2821.53, stdev=424.50 00:35:11.182 lat (usec): min=830, max=5703, avg=2830.40, stdev=424.42 00:35:11.182 clat percentiles (usec): 00:35:11.182 | 1.00th=[ 1713], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2507], 00:35:11.182 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2966], 00:35:11.182 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3261], 95.00th=[ 3490], 00:35:11.182 | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[ 5145], 00:35:11.182 | 99.99th=[ 5473] 00:35:11.182 bw ( KiB/s): min=21648, max=23184, per=27.09%, avg=22529.78, stdev=545.79, samples=9 00:35:11.182 iops : min= 2706, max= 2898, avg=2816.22, stdev=68.22, samples=9 00:35:11.182 lat (usec) : 1000=0.07% 00:35:11.182 lat (msec) : 2=2.07%, 4=96.52%, 10=1.34% 00:35:11.182 cpu : usr=95.66%, sys=4.02%, ctx=6, majf=0, minf=9 00:35:11.182 IO depths : 1=0.2%, 2=6.2%, 4=64.8%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.182 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.182 issued rwts: total=14051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.182 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:11.182 filename0: (groupid=0, jobs=1): err= 0: pid=996564: Tue Nov 26 07:44:38 2024 00:35:11.182 read: IOPS=2500, BW=19.5MiB/s (20.5MB/s)(98.5MiB/5041msec) 00:35:11.182 slat (nsec): min=6085, max=36338, avg=8912.26, stdev=3089.02 00:35:11.182 clat (usec): min=978, max=41167, avg=3157.89, stdev=757.28 00:35:11.182 lat (usec): min=984, max=41178, avg=3166.80, stdev=757.08 00:35:11.182 clat percentiles (usec): 00:35:11.182 | 1.00th=[ 2180], 5.00th=[ 2540], 10.00th=[ 2704], 20.00th=[ 2868], 00:35:11.182 | 30.00th=[ 2999], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:35:11.182 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3720], 95.00th=[ 4178], 00:35:11.182 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5407], 00:35:11.182 | 99.99th=[41157] 00:35:11.182 bw ( KiB/s): min=19376, max=20704, per=24.23%, avg=20156.44, stdev=441.58, samples=9 00:35:11.182 iops : min= 2422, max= 2588, avg=2519.56, stdev=55.20, samples=9 00:35:11.182 lat (usec) : 1000=0.02% 00:35:11.182 lat (msec) : 2=0.45%, 4=93.49%, 10=6.02%, 50=0.02% 00:35:11.182 cpu : usr=95.83%, sys=3.87%, ctx=10, majf=0, minf=9 00:35:11.182 IO depths : 1=0.2%, 2=2.8%, 4=68.5%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.182 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.182 issued rwts: total=12606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.182 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:11.182 filename1: (groupid=0, jobs=1): err= 0: pid=996565: Tue Nov 26 07:44:38 2024 00:35:11.182 read: IOPS=2612, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:35:11.182 slat (nsec): min=6157, max=44503, avg=9061.78, stdev=3147.11 00:35:11.182 clat (usec): min=628, max=5591, avg=3034.64, stdev=477.73 00:35:11.182 lat (usec): min=639, max=5597, avg=3043.70, stdev=477.56 00:35:11.182 clat percentiles (usec): 00:35:11.182 | 1.00th=[ 1991], 5.00th=[ 
2343], 10.00th=[ 2540], 20.00th=[ 2704], 00:35:11.182 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:35:11.182 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3589], 95.00th=[ 3916], 00:35:11.182 | 99.00th=[ 4752], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5473], 00:35:11.182 | 99.99th=[ 5604] 00:35:11.182 bw ( KiB/s): min=20384, max=22000, per=25.04%, avg=20828.44, stdev=504.81, samples=9 00:35:11.182 iops : min= 2548, max= 2750, avg=2603.56, stdev=63.10, samples=9 00:35:11.182 lat (usec) : 750=0.01%, 1000=0.01% 00:35:11.182 lat (msec) : 2=1.04%, 4=94.79%, 10=4.15% 00:35:11.182 cpu : usr=95.32%, sys=4.36%, ctx=10, majf=0, minf=9 00:35:11.182 IO depths : 1=0.1%, 2=4.5%, 4=66.9%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.182 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.182 issued rwts: total=13070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.182 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:11.182 filename1: (groupid=0, jobs=1): err= 0: pid=996566: Tue Nov 26 07:44:38 2024 00:35:11.182 read: IOPS=2536, BW=19.8MiB/s (20.8MB/s)(99.1MiB/5001msec) 00:35:11.182 slat (nsec): min=6174, max=33602, avg=8776.95, stdev=3079.62 00:35:11.182 clat (usec): min=679, max=5961, avg=3129.27, stdev=487.98 00:35:11.182 lat (usec): min=686, max=5973, avg=3138.05, stdev=487.76 00:35:11.182 clat percentiles (usec): 00:35:11.182 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2835], 00:35:11.182 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:35:11.182 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3687], 95.00th=[ 4113], 00:35:11.182 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5473], 00:35:11.182 | 99.99th=[ 5800] 00:35:11.182 bw ( KiB/s): min=19344, max=20896, per=24.39%, avg=20290.78, stdev=478.80, samples=9 00:35:11.182 iops : min= 2418, max= 2612, avg=2536.33, stdev=59.84, samples=9 00:35:11.182 lat (usec) : 750=0.02%, 1000=0.02% 00:35:11.182 lat (msec) : 2=0.46%, 4=93.64%, 10=5.87% 00:35:11.182 cpu : usr=96.10%, sys=3.56%, ctx=10, majf=0, minf=9 00:35:11.182 IO depths : 1=0.1%, 2=2.6%, 4=68.8%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.182 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.183 issued rwts: total=12683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.183 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:11.183 00:35:11.183 Run status group 0 (all jobs): 00:35:11.183 READ: bw=81.2MiB/s (85.2MB/s), 19.5MiB/s-21.9MiB/s (20.5MB/s-23.0MB/s), io=409MiB (429MB), run=5001-5041msec 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.183 00:35:11.183 real 0m24.327s 00:35:11.183 user 4m50.714s 00:35:11.183 sys 0m5.417s 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 ************************************ 00:35:11.183 END TEST fio_dif_rand_params 00:35:11.183 ************************************ 00:35:11.183 07:44:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:11.183 07:44:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:11.183 07:44:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 ************************************ 00:35:11.183 START TEST fio_dif_digest 00:35:11.183 ************************************ 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 
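Compared with the random-params runs above, the digest pass changes only a handful of knobs: the null bdev is created with --dif-type 3, the fio workload moves to 128 KiB blocks with numjobs=3, iodepth=3 and runtime=10, and header/data digests are turned on for the TCP connection (hdgst/ddgst true in the generated bdev_nvme_attach_controller parameters, visible further down). A sketch of the pieces that differ, with the same caveats as above about the running target and the rpc.py wrapper:

  # digest pass: DIF type 3 metadata on the null bdev backing the namespace
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # the generated fio-side JSON then attaches the controller with digests enabled:
  #   "hdgst": true, "ddgst": true
  # while the fio job switches to bs=128k, numjobs=3, iodepth=3, runtime=10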
00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 bdev_null0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:11.183 [2024-11-26 07:44:39.213990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:11.183 { 00:35:11.183 "params": { 00:35:11.183 "name": "Nvme$subsystem", 00:35:11.183 "trtype": 
"$TEST_TRANSPORT", 00:35:11.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:11.183 "adrfam": "ipv4", 00:35:11.183 "trsvcid": "$NVMF_PORT", 00:35:11.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:11.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:11.183 "hdgst": ${hdgst:-false}, 00:35:11.183 "ddgst": ${ddgst:-false} 00:35:11.183 }, 00:35:11.183 "method": "bdev_nvme_attach_controller" 00:35:11.183 } 00:35:11.183 EOF 00:35:11.183 )") 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:11.183 "params": { 00:35:11.183 "name": "Nvme0", 00:35:11.183 "trtype": "tcp", 00:35:11.183 "traddr": "10.0.0.2", 00:35:11.183 "adrfam": "ipv4", 00:35:11.183 "trsvcid": "4420", 00:35:11.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.183 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:11.183 "hdgst": true, 00:35:11.183 "ddgst": true 00:35:11.183 }, 00:35:11.183 "method": "bdev_nvme_attach_controller" 00:35:11.183 }' 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:11.183 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:11.466 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:11.466 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:11.466 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:11.466 07:44:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:11.724 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:11.724 ... 
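The resolved attach-controller parameters just printed, together with the filename0 banner, are enough to reconstruct what fio is being handed on the two descriptors: /dev/fd/62 carries the SPDK bdev JSON and /dev/fd/61 the fio job file. The sketch below is a hedged reconstruction, not a copy of target/dif.sh: the outer subsystems/config wrapper is the standard SPDK JSON-config layout and is assumed (only the inner method/params object is printed), the job options beyond the banner (time_based, runtime, numjobs) are inferred from the roughly ten-second, three-thread run reported further down, the bdev name Nvme0n1 is the conventional name for namespace 1 of the attached Nvme0 controller, and the /tmp paths stand in for the file descriptors. fio's own banner and the three job summaries continue below.

# Reconstruction of the fio_bdev invocation traced above; anything not visible
# in the log (paths, extra option names) is an assumption.
cat > /tmp/bdev_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

cat > /tmp/dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev     ; drive SPDK bdevs through the fio plugin
thread=1
rw=randread
bs=128k                ; matches "bs=(R) 128KiB-128KiB" in the banner above
iodepth=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1       ; namespace 1 of the attached Nvme0 controller (assumed)
numjobs=3              ; "Starting 3 threads" below
EOF

# Plugin preload and launch as in the trace, with file paths in place of the fds:
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev_nvme.json /tmp/dif_digest.fio

With hdgst and ddgst set to true the initiator enables NVMe/TCP header and data digests (CRC32C per PDU), which is what fio_dif_digest exercises on top of the DIF-formatted null bdev created earlier. In the per-job summaries that follow, bandwidth is simply IOPS times block size: 288 IOPS x 128 KiB comes to about 36 MiB/s for the first job, matching the reported 36.1 MiB/s.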
00:35:11.724 fio-3.35 00:35:11.724 Starting 3 threads 00:35:24.083 00:35:24.083 filename0: (groupid=0, jobs=1): err= 0: pid=997776: Tue Nov 26 07:44:50 2024 00:35:24.083 read: IOPS=288, BW=36.1MiB/s (37.8MB/s)(362MiB/10046msec) 00:35:24.083 slat (nsec): min=6495, max=27101, avg=11734.79, stdev=1667.78 00:35:24.084 clat (usec): min=7962, max=49051, avg=10367.84, stdev=1219.10 00:35:24.084 lat (usec): min=7974, max=49063, avg=10379.58, stdev=1219.06 00:35:24.084 clat percentiles (usec): 00:35:24.084 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:35:24.084 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:35:24.084 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:35:24.084 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13042], 99.95th=[47449], 00:35:24.084 | 99.99th=[49021] 00:35:24.084 bw ( KiB/s): min=36096, max=38144, per=35.16%, avg=37081.60, stdev=588.92, samples=20 00:35:24.084 iops : min= 282, max= 298, avg=289.70, stdev= 4.60, samples=20 00:35:24.084 lat (msec) : 10=31.18%, 20=68.75%, 50=0.07% 00:35:24.084 cpu : usr=94.18%, sys=5.54%, ctx=18, majf=0, minf=97 00:35:24.084 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.084 issued rwts: total=2899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.084 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:24.084 filename0: (groupid=0, jobs=1): err= 0: pid=997777: Tue Nov 26 07:44:50 2024 00:35:24.084 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(341MiB/10045msec) 00:35:24.084 slat (nsec): min=6540, max=26369, avg=11805.34, stdev=1685.86 00:35:24.084 clat (usec): min=8833, max=51040, avg=11025.31, stdev=1253.47 00:35:24.084 lat (usec): min=8845, max=51052, avg=11037.11, stdev=1253.47 00:35:24.084 clat percentiles (usec): 00:35:24.084 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:35:24.084 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:35:24.084 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:35:24.084 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13960], 99.95th=[46924], 00:35:24.084 | 99.99th=[51119] 00:35:24.084 bw ( KiB/s): min=33536, max=35584, per=33.06%, avg=34867.20, stdev=509.30, samples=20 00:35:24.084 iops : min= 262, max= 278, avg=272.40, stdev= 3.98, samples=20 00:35:24.084 lat (msec) : 10=7.92%, 20=92.00%, 50=0.04%, 100=0.04% 00:35:24.084 cpu : usr=94.14%, sys=5.56%, ctx=22, majf=0, minf=78 00:35:24.084 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.084 issued rwts: total=2726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.084 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:24.084 filename0: (groupid=0, jobs=1): err= 0: pid=997778: Tue Nov 26 07:44:50 2024 00:35:24.084 read: IOPS=264, BW=33.0MiB/s (34.6MB/s)(332MiB/10044msec) 00:35:24.084 slat (nsec): min=6512, max=27870, avg=11684.75, stdev=1711.57 00:35:24.084 clat (usec): min=8688, max=49515, avg=11332.57, stdev=1226.09 00:35:24.084 lat (usec): min=8701, max=49527, avg=11344.25, stdev=1226.10 00:35:24.084 clat percentiles (usec): 00:35:24.084 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 
00:35:24.084 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:35:24.084 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:35:24.084 | 99.00th=[13173], 99.50th=[13304], 99.90th=[14222], 99.95th=[44827], 00:35:24.084 | 99.99th=[49546] 00:35:24.084 bw ( KiB/s): min=33024, max=34560, per=32.16%, avg=33920.00, stdev=385.12, samples=20 00:35:24.084 iops : min= 258, max= 270, avg=265.00, stdev= 3.01, samples=20 00:35:24.084 lat (msec) : 10=3.66%, 20=96.27%, 50=0.08% 00:35:24.084 cpu : usr=94.46%, sys=5.24%, ctx=20, majf=0, minf=61 00:35:24.084 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:24.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.084 issued rwts: total=2652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.084 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:24.084 00:35:24.084 Run status group 0 (all jobs): 00:35:24.084 READ: bw=103MiB/s (108MB/s), 33.0MiB/s-36.1MiB/s (34.6MB/s-37.8MB/s), io=1035MiB (1085MB), run=10044-10046msec 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.084 00:35:24.084 real 0m11.217s 00:35:24.084 user 0m34.610s 00:35:24.084 sys 0m1.934s 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:24.084 07:44:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.084 ************************************ 00:35:24.084 END TEST fio_dif_digest 00:35:24.084 ************************************ 00:35:24.084 07:44:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:24.084 07:44:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:24.084 rmmod nvme_tcp 00:35:24.084 rmmod nvme_fabrics 00:35:24.084 rmmod nvme_keyring 00:35:24.084 07:44:50 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 989193 ']' 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 989193 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 989193 ']' 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 989193 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 989193 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 989193' 00:35:24.084 killing process with pid 989193 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@973 -- # kill 989193 00:35:24.084 07:44:50 nvmf_dif -- common/autotest_common.sh@978 -- # wait 989193 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:24.084 07:44:50 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:25.022 Waiting for block devices as requested 00:35:25.022 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:25.281 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:25.281 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:25.281 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:25.541 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:25.541 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:25.541 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:25.541 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:25.541 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:25.800 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:25.800 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:25.800 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:26.058 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:26.058 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:26.058 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:26.058 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:26.317 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:26.317 07:44:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.317 07:44:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:26.317 07:44:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.853 07:44:56 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:28.853 
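From here the harness unwinds everything it set up: nvmftestfini unloads the NVMe/TCP initiator modules, kills the nvmf_tgt process, rebinds the PCI devices to their kernel drivers, strips only the SPDK-tagged iptables rules, and removes the test namespace and addresses. A hedged recap of that sequence, following the order of the trace above rather than the exact function bodies in nvmf/common.sh ($rootdir and the netns-removal one-liner are shorthand, not copied from the script):

# Teardown as reconstructed from the xtrace output above.
sync
modprobe -v -r nvme-tcp nvme-fabrics                  # per the rmmod output this also drops nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"                    # stop the target reactor (pid 989193 in this run)
"$rootdir/scripts/setup.sh" reset                     # bind NVMe and ioatdma devices back to kernel drivers
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only rules tagged with the SPDK_NVMF comment
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # remove_spdk_ns (assumed equivalent)
ip -4 addr flush cvl_0_1                              # clear the initiator-side test address

The real/user/sys timing and the END TEST nvmf_dif banner that follow close out this suite before autotest moves on to nvmf_abort_qd_sizes.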
00:35:28.853 real 1m12.735s 00:35:28.853 user 7m6.399s 00:35:28.853 sys 0m20.193s 00:35:28.853 07:44:56 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.853 07:44:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.853 ************************************ 00:35:28.853 END TEST nvmf_dif 00:35:28.853 ************************************ 00:35:28.853 07:44:56 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:28.853 07:44:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:28.853 07:44:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.853 07:44:56 -- common/autotest_common.sh@10 -- # set +x 00:35:28.853 ************************************ 00:35:28.853 START TEST nvmf_abort_qd_sizes 00:35:28.853 ************************************ 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:28.853 * Looking for test storage... 00:35:28.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:28.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.853 --rc genhtml_branch_coverage=1 00:35:28.853 --rc genhtml_function_coverage=1 00:35:28.853 --rc genhtml_legend=1 00:35:28.853 --rc geninfo_all_blocks=1 00:35:28.853 --rc geninfo_unexecuted_blocks=1 00:35:28.853 00:35:28.853 ' 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:28.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.853 --rc genhtml_branch_coverage=1 00:35:28.853 --rc genhtml_function_coverage=1 00:35:28.853 --rc genhtml_legend=1 00:35:28.853 --rc geninfo_all_blocks=1 00:35:28.853 --rc geninfo_unexecuted_blocks=1 00:35:28.853 00:35:28.853 ' 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:28.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.853 --rc genhtml_branch_coverage=1 00:35:28.853 --rc genhtml_function_coverage=1 00:35:28.853 --rc genhtml_legend=1 00:35:28.853 --rc geninfo_all_blocks=1 00:35:28.853 --rc geninfo_unexecuted_blocks=1 00:35:28.853 00:35:28.853 ' 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:28.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.853 --rc genhtml_branch_coverage=1 00:35:28.853 --rc genhtml_function_coverage=1 00:35:28.853 --rc genhtml_legend=1 00:35:28.853 --rc geninfo_all_blocks=1 00:35:28.853 --rc geninfo_unexecuted_blocks=1 00:35:28.853 00:35:28.853 ' 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.853 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:28.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:28.854 07:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:34.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:34.130 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:34.130 Found net devices under 0000:86:00.0: cvl_0_0 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:34.130 Found net devices under 0000:86:00.1: cvl_0_1 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.130 07:45:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.130 07:45:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.130 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.130 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.130 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.130 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.131 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.131 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:35:34.131 00:35:34.131 --- 10.0.0.2 ping statistics --- 00:35:34.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.131 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:35:34.131 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:35:34.131 00:35:34.131 --- 10.0.0.1 ping statistics --- 00:35:34.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.131 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:35:34.131 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.131 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:34.131 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:34.131 07:45:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.669 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:36.669 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.609 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1005627 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1005627 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1005627 ']' 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.609 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.868 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
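Before the abort tests can talk NVMe/TCP to themselves, the harness splits the two e810 ports across network namespaces: the target-side port cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the default namespace, an SPDK-tagged iptables rule opens TCP port 4420, and the two one-packet pings confirm reachability in both directions. A hedged recap of those commands as they appear in the trace; the backgrounding of nvmf_tgt and the pid capture are assumptions, the log only shows nvmfpid being set to 1005627 and waitforlisten polling /var/tmp/spdk.sock, which is what produces the repeated "Waiting for process..." lines below:

# Namespace topology and target launch, reconstructed from the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side e810 port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # default namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> default namespace
# scripts/setup.sh then rebinds the NVMe and ioatdma devices to vfio-pci (see the
# rebind lines above) before the target is started inside the namespace:
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!                                           # 1005627 in this run; waitforlisten polls the RPC socket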
00:35:37.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.868 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.868 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.868 [2024-11-26 07:45:05.748448] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:35:37.868 [2024-11-26 07:45:05.748493] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.868 [2024-11-26 07:45:05.815840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:37.868 [2024-11-26 07:45:05.860189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.868 [2024-11-26 07:45:05.860238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.868 [2024-11-26 07:45:05.860246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.868 [2024-11-26 07:45:05.860251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.868 [2024-11-26 07:45:05.860256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:37.868 [2024-11-26 07:45:05.861831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.868 [2024-11-26 07:45:05.861929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.868 [2024-11-26 07:45:05.862015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.868 [2024-11-26 07:45:05.862017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.868 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.868 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:37.868 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:37.868 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:37.868 07:45:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:38.128 07:45:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:38.128 
07:45:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.128 07:45:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.128 ************************************ 00:35:38.128 START TEST spdk_target_abort 00:35:38.128 ************************************ 00:35:38.128 07:45:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:38.128 07:45:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:38.128 07:45:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:38.128 07:45:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.128 07:45:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.419 spdk_targetn1 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.419 [2024-11-26 07:45:08.873494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.419 [2024-11-26 07:45:08.920092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:41.419 07:45:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:43.989 Initializing NVMe Controllers 00:35:43.989 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:43.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:43.989 Initialization complete. Launching workers. 00:35:43.989 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15739, failed: 0 00:35:43.989 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1380, failed to submit 14359 00:35:43.989 success 723, unsuccessful 657, failed 0 00:35:43.989 07:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:43.989 07:45:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:48.180 Initializing NVMe Controllers 00:35:48.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:48.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:48.180 Initialization complete. Launching workers. 00:35:48.180 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8531, failed: 0 00:35:48.180 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1262, failed to submit 7269 00:35:48.180 success 293, unsuccessful 969, failed 0 00:35:48.180 07:45:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.180 07:45:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:50.716 Initializing NVMe Controllers 00:35:50.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:50.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:50.716 Initialization complete. Launching workers. 
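The spdk_target_abort test runs build/examples/abort three times against nqn.2016-06.io.spdk:testnqn, once per queue depth in qds=(4 24 64), and each run prints the same counters, which tie together arithmetically: success plus unsuccessful adds up to the aborts submitted, and submitted plus "failed to submit" adds up to the I/O completed. A quick consistency check on the two runs already summarized above (the qd=64 counters continue below); this is plain arithmetic on the logged values, not additional test output:

# qd=4  run:  723 success + 657 unsuccessful  = 1380  aborts submitted
#             1380 submitted + 14359 not sent = 15739 I/O completed
# qd=24 run:  293 + 969   = 1262  aborts submitted
#             1262 + 7269 = 8531  I/O completed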
00:35:50.716 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37842, failed: 0 00:35:50.716 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2877, failed to submit 34965 00:35:50.717 success 570, unsuccessful 2307, failed 0 00:35:50.717 07:45:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:50.717 07:45:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.717 07:45:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:50.717 07:45:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.717 07:45:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:50.717 07:45:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.717 07:45:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1005627 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1005627 ']' 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1005627 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1005627 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1005627' 00:35:52.095 killing process with pid 1005627 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1005627 00:35:52.095 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1005627 00:35:52.354 00:35:52.354 real 0m14.180s 00:35:52.354 user 0m53.963s 00:35:52.354 sys 0m2.649s 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:52.354 ************************************ 00:35:52.354 END TEST spdk_target_abort 00:35:52.354 ************************************ 00:35:52.354 07:45:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:52.354 07:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:52.354 07:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.354 07:45:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:52.354 ************************************ 00:35:52.354 START TEST kernel_target_abort 00:35:52.354 
************************************ 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:52.354 07:45:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:54.893 Waiting for block devices as requested 00:35:54.893 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:55.153 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:55.153 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:55.153 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:55.153 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:55.413 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:55.413 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:55.413 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:55.413 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:55.672 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:55.672 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:55.672 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:55.672 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:55.932 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:55.932 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:55.932 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:56.192 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:56.192 No valid GPT data, bailing 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:56.192 07:45:24 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:56.192 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:56.452 00:35:56.452 Discovery Log Number of Records 2, Generation counter 2 00:35:56.452 =====Discovery Log Entry 0====== 00:35:56.452 trtype: tcp 00:35:56.452 adrfam: ipv4 00:35:56.452 subtype: current discovery subsystem 00:35:56.452 treq: not specified, sq flow control disable supported 00:35:56.452 portid: 1 00:35:56.452 trsvcid: 4420 00:35:56.452 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:56.452 traddr: 10.0.0.1 00:35:56.452 eflags: none 00:35:56.452 sectype: none 00:35:56.452 =====Discovery Log Entry 1====== 00:35:56.452 trtype: tcp 00:35:56.452 adrfam: ipv4 00:35:56.452 subtype: nvme subsystem 00:35:56.452 treq: not specified, sq flow control disable supported 00:35:56.452 portid: 1 00:35:56.452 trsvcid: 4420 00:35:56.452 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:56.452 traddr: 10.0.0.1 00:35:56.452 eflags: none 00:35:56.452 sectype: none 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.452 07:45:24 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:56.452 07:45:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:59.743 Initializing NVMe Controllers 00:35:59.743 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:59.743 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:59.743 Initialization complete. Launching workers. 00:35:59.743 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92002, failed: 0 00:35:59.743 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92002, failed to submit 0 00:35:59.743 success 0, unsuccessful 92002, failed 0 00:35:59.743 07:45:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:59.743 07:45:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.074 Initializing NVMe Controllers 00:36:03.074 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:03.074 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:03.074 Initialization complete. Launching workers. 
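The kernel-mode target that these runs abort against was assembled a little earlier in the trace purely through nvmet configfs writes (mkdir/echo/ln -s under /sys/kernel/config/nvmet), followed by an nvme discover sanity check. A condensed standalone sketch of those steps is below; the NQN, device path and address come from the trace, the attribute names are the standard nvmet configfs ones, and loading nvmet-tcp explicitly is an assumption (the script itself only checks for the nvmet module):

  # Export /dev/nvme0n1 over NVMe/TCP with the in-kernel nvmet target (run as root).
  modprobe nvmet
  modprobe nvmet-tcp
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$subsys"
  echo 1 > "$subsys/attr_allow_any_host"            # accept any host NQN
  mkdir "$subsys/namespaces/1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"

  mkdir "$port"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/$nqn"           # expose the subsystem on the port

  nvme discover -t tcp -a 10.0.0.1 -s 4420          # should list nqn.2016-06.io.spdk:testnqn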
00:36:03.074 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145698, failed: 0 00:36:03.074 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36554, failed to submit 109144 00:36:03.074 success 0, unsuccessful 36554, failed 0 00:36:03.074 07:45:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.074 07:45:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:05.610 Initializing NVMe Controllers 00:36:05.610 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:05.610 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:05.610 Initialization complete. Launching workers. 00:36:05.610 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137825, failed: 0 00:36:05.610 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34522, failed to submit 103303 00:36:05.610 success 0, unsuccessful 34522, failed 0 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:05.610 07:45:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:08.141 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:08.141 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:36:08.141 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:09.078 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:09.078 00:36:09.078 real 0m16.662s 00:36:09.078 user 0m8.682s 00:36:09.078 sys 0m4.548s 00:36:09.078 07:45:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.078 07:45:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.078 ************************************ 00:36:09.078 END TEST kernel_target_abort 00:36:09.078 ************************************ 00:36:09.078 07:45:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:09.078 07:45:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:09.078 07:45:36 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:09.078 07:45:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:09.078 07:45:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:09.078 07:45:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:09.078 07:45:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:09.078 07:45:36 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:09.078 rmmod nvme_tcp 00:36:09.078 rmmod nvme_fabrics 00:36:09.078 rmmod nvme_keyring 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1005627 ']' 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1005627 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1005627 ']' 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1005627 00:36:09.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1005627) - No such process 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1005627 is not found' 00:36:09.078 Process with pid 1005627 is not found 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:09.078 07:45:37 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:11.615 Waiting for block devices as requested 00:36:11.615 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:11.615 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:11.615 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:11.615 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:11.615 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:11.875 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:11.875 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:11.875 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:11.875 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.134 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:12.134 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:12.134 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:12.392 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:12.392 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:12.392 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:12.392 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:12.652 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:12.652 07:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.187 07:45:42 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:15.187 00:36:15.187 real 0m46.300s 00:36:15.187 user 1m6.521s 00:36:15.187 sys 0m15.162s 00:36:15.187 07:45:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.187 07:45:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:15.187 ************************************ 00:36:15.187 END TEST nvmf_abort_qd_sizes 00:36:15.187 ************************************ 00:36:15.187 07:45:42 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:15.187 07:45:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:15.187 07:45:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.187 07:45:42 -- common/autotest_common.sh@10 -- # set +x 00:36:15.187 ************************************ 00:36:15.187 START TEST keyring_file 00:36:15.187 ************************************ 00:36:15.187 07:45:42 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:15.187 * Looking for test storage... 
00:36:15.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:15.187 07:45:42 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:15.187 07:45:42 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:36:15.187 07:45:42 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:15.187 07:45:42 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.187 07:45:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:15.187 07:45:42 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.187 07:45:42 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:15.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.187 --rc genhtml_branch_coverage=1 00:36:15.187 --rc genhtml_function_coverage=1 00:36:15.187 --rc genhtml_legend=1 00:36:15.187 --rc geninfo_all_blocks=1 00:36:15.187 --rc geninfo_unexecuted_blocks=1 00:36:15.187 00:36:15.187 ' 00:36:15.187 07:45:42 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:15.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.188 --rc genhtml_branch_coverage=1 00:36:15.188 --rc genhtml_function_coverage=1 00:36:15.188 --rc genhtml_legend=1 00:36:15.188 --rc geninfo_all_blocks=1 
00:36:15.188 --rc geninfo_unexecuted_blocks=1 00:36:15.188 00:36:15.188 ' 00:36:15.188 07:45:42 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:15.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.188 --rc genhtml_branch_coverage=1 00:36:15.188 --rc genhtml_function_coverage=1 00:36:15.188 --rc genhtml_legend=1 00:36:15.188 --rc geninfo_all_blocks=1 00:36:15.188 --rc geninfo_unexecuted_blocks=1 00:36:15.188 00:36:15.188 ' 00:36:15.188 07:45:42 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:15.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.188 --rc genhtml_branch_coverage=1 00:36:15.188 --rc genhtml_function_coverage=1 00:36:15.188 --rc genhtml_legend=1 00:36:15.188 --rc geninfo_all_blocks=1 00:36:15.188 --rc geninfo_unexecuted_blocks=1 00:36:15.188 00:36:15.188 ' 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.188 07:45:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.188 07:45:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.188 07:45:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.188 07:45:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.188 07:45:42 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.188 07:45:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.188 07:45:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.188 07:45:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:15.188 07:45:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
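prep_key, whose trace starts here, turns a raw hex key into an NVMe/TCP TLS PSK interchange string (via a python one-liner in nvmf/common.sh that is not reproduced here), writes it to a mktemp file and locks the mode down to 0600; the strict mode matters, because a later negative test shows keyring_file_add_key rejecting the same file at 0660. A rough sketch of that flow, assuming the helpers from nvmf/common.sh are sourced:

  # Materialize a TLS PSK into a private temp file the keyring can load.
  key_hex=00112233445566778899aabbccddeeff
  path=$(mktemp)                                   # e.g. /tmp/tmp.Zf3LgaHvlx
  format_interchange_psk "$key_hex" 0 > "$path"    # digest 0, NVMeTLSkey-1 prefix
  chmod 0600 "$path"                               # looser modes are refused later
  echo "$path"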
00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Zf3LgaHvlx 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Zf3LgaHvlx 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Zf3LgaHvlx 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Zf3LgaHvlx 00:36:15.188 07:45:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gRNzm9QAq6 00:36:15.188 07:45:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:15.188 07:45:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:15.189 07:45:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:15.189 07:45:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gRNzm9QAq6 00:36:15.189 07:45:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gRNzm9QAq6 00:36:15.189 07:45:43 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.gRNzm9QAq6 00:36:15.189 07:45:43 keyring_file -- keyring/file.sh@30 -- # tgtpid=1014588 00:36:15.189 07:45:43 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1014588 00:36:15.189 07:45:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1014588 ']' 00:36:15.189 07:45:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.189 07:45:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.189 07:45:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:15.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.189 07:45:43 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:15.189 07:45:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.189 07:45:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:15.189 [2024-11-26 07:45:43.087813] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:36:15.189 [2024-11-26 07:45:43.087866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014588 ] 00:36:15.189 [2024-11-26 07:45:43.148395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.189 [2024-11-26 07:45:43.191175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.448 07:45:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.448 07:45:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:15.448 07:45:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:15.448 07:45:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.448 07:45:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:15.448 [2024-11-26 07:45:43.398765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.448 null0 00:36:15.448 [2024-11-26 07:45:43.430822] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:15.448 [2024-11-26 07:45:43.431180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:15.448 07:45:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.448 07:45:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:15.448 07:45:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:15.448 07:45:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:15.449 [2024-11-26 07:45:43.458882] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:15.449 request: 00:36:15.449 { 00:36:15.449 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.449 "secure_channel": false, 00:36:15.449 "listen_address": { 00:36:15.449 "trtype": "tcp", 00:36:15.449 "traddr": "127.0.0.1", 00:36:15.449 "trsvcid": "4420" 00:36:15.449 }, 00:36:15.449 "method": "nvmf_subsystem_add_listener", 00:36:15.449 "req_id": 1 00:36:15.449 } 00:36:15.449 Got JSON-RPC error response 
00:36:15.449 response: 00:36:15.449 { 00:36:15.449 "code": -32602, 00:36:15.449 "message": "Invalid parameters" 00:36:15.449 } 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:15.449 07:45:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=1014653 00:36:15.449 07:45:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1014653 /var/tmp/bperf.sock 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1014653 ']' 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.449 07:45:43 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.449 07:45:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:15.449 [2024-11-26 07:45:43.510718] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
00:36:15.449 [2024-11-26 07:45:43.510763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014653 ] 00:36:15.708 [2024-11-26 07:45:43.571859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.708 [2024-11-26 07:45:43.614220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.708 07:45:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.708 07:45:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:15.708 07:45:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx 00:36:15.708 07:45:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx 00:36:15.967 07:45:43 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gRNzm9QAq6 00:36:15.968 07:45:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gRNzm9QAq6 00:36:16.226 07:45:44 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:16.226 07:45:44 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:16.226 07:45:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.226 07:45:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:16.226 07:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.226 07:45:44 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Zf3LgaHvlx == \/\t\m\p\/\t\m\p\.\Z\f\3\L\g\a\H\v\l\x ]] 00:36:16.226 07:45:44 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:16.226 07:45:44 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:16.226 07:45:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.226 07:45:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:16.226 07:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.486 07:45:44 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.gRNzm9QAq6 == \/\t\m\p\/\t\m\p\.\g\R\N\z\m\9\Q\A\q\6 ]] 00:36:16.486 07:45:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:16.486 07:45:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:16.486 07:45:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.486 07:45:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.486 07:45:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:16.486 07:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.745 07:45:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:16.745 07:45:44 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:16.745 07:45:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:16.745 07:45:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.745 07:45:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:16.745 07:45:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:16.745 07:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.004 07:45:44 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:17.004 07:45:44 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.004 07:45:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.004 [2024-11-26 07:45:45.048091] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:17.263 nvme0n1 00:36:17.263 07:45:45 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.263 07:45:45 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:17.263 07:45:45 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:17.263 07:45:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.521 07:45:45 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:17.521 07:45:45 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:17.521 Running I/O for 1 seconds... 
00:36:18.900 18442.00 IOPS, 72.04 MiB/s 00:36:18.900 Latency(us) 00:36:18.900 [2024-11-26T06:45:47.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:18.900 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:18.900 nvme0n1 : 1.00 18485.37 72.21 0.00 0.00 6911.83 4502.04 17780.20 00:36:18.900 [2024-11-26T06:45:47.000Z] =================================================================================================================== 00:36:18.900 [2024-11-26T06:45:47.000Z] Total : 18485.37 72.21 0.00 0.00 6911.83 4502.04 17780.20 00:36:18.900 { 00:36:18.900 "results": [ 00:36:18.900 { 00:36:18.900 "job": "nvme0n1", 00:36:18.900 "core_mask": "0x2", 00:36:18.900 "workload": "randrw", 00:36:18.900 "percentage": 50, 00:36:18.900 "status": "finished", 00:36:18.900 "queue_depth": 128, 00:36:18.900 "io_size": 4096, 00:36:18.900 "runtime": 1.004578, 00:36:18.900 "iops": 18485.373958020184, 00:36:18.900 "mibps": 72.20849202351634, 00:36:18.900 "io_failed": 0, 00:36:18.900 "io_timeout": 0, 00:36:18.900 "avg_latency_us": 6911.830016623353, 00:36:18.900 "min_latency_us": 4502.038260869565, 00:36:18.900 "max_latency_us": 17780.201739130436 00:36:18.900 } 00:36:18.900 ], 00:36:18.900 "core_count": 1 00:36:18.900 } 00:36:18.900 07:45:46 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:18.900 07:45:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:18.900 07:45:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:18.900 07:45:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.900 07:45:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.900 07:45:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.900 07:45:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.900 07:45:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.160 07:45:47 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:19.160 07:45:47 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:19.160 07:45:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:19.160 07:45:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.160 07:45:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.160 07:45:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.160 07:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.160 07:45:47 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:19.160 07:45:47 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:19.160 07:45:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:19.160 07:45:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:19.160 07:45:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 
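Everything in the successful pass above reduces to a handful of RPCs against the bdevperf control socket: register the 0600 PSK file as key0, attach an NVMe/TCP controller that references it by name, check the key's refcount, run the workload, then detach. A condensed sketch using the same rpc.py calls (socket path, NQNs and key path are taken from the trace; the RPC/SOCK shell variables are just shorthand):

  # Positive keyring_file flow over the bdevperf RPC socket.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx
  $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # refcnt is now 2: one reference held by the keyring, one by the controller
  $RPC -s $SOCK keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'
  $RPC -s $SOCK bdev_nvme_detach_controller nvme0

The failed attach traced next simply swaps --psk key0 for --psk key1 and asserts that the RPC returns an error.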
00:36:19.160 07:45:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:19.160 07:45:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:19.160 07:45:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:19.160 07:45:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:19.160 07:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:19.419 [2024-11-26 07:45:47.433224] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:19.419 [2024-11-26 07:45:47.433737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x259cd20 (107): Transport endpoint is not connected 00:36:19.419 [2024-11-26 07:45:47.434732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x259cd20 (9): Bad file descriptor 00:36:19.419 [2024-11-26 07:45:47.435733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:19.419 [2024-11-26 07:45:47.435744] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:19.419 [2024-11-26 07:45:47.435752] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:19.419 [2024-11-26 07:45:47.435761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:19.419 request: 00:36:19.419 { 00:36:19.419 "name": "nvme0", 00:36:19.419 "trtype": "tcp", 00:36:19.419 "traddr": "127.0.0.1", 00:36:19.419 "adrfam": "ipv4", 00:36:19.419 "trsvcid": "4420", 00:36:19.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:19.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:19.419 "prchk_reftag": false, 00:36:19.419 "prchk_guard": false, 00:36:19.419 "hdgst": false, 00:36:19.419 "ddgst": false, 00:36:19.419 "psk": "key1", 00:36:19.419 "allow_unrecognized_csi": false, 00:36:19.419 "method": "bdev_nvme_attach_controller", 00:36:19.419 "req_id": 1 00:36:19.419 } 00:36:19.419 Got JSON-RPC error response 00:36:19.419 response: 00:36:19.419 { 00:36:19.419 "code": -5, 00:36:19.419 "message": "Input/output error" 00:36:19.419 } 00:36:19.419 07:45:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:19.419 07:45:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:19.419 07:45:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:19.419 07:45:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:19.419 07:45:47 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:19.419 07:45:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:19.419 07:45:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.419 07:45:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.419 07:45:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.419 07:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.677 07:45:47 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:19.677 07:45:47 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:19.677 07:45:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:19.677 07:45:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.677 07:45:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.677 07:45:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.677 07:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.936 07:45:47 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:19.936 07:45:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:19.936 07:45:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:20.195 07:45:48 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:20.195 07:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:20.195 07:45:48 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:20.195 07:45:48 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:20.195 07:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.454 07:45:48 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:20.454 07:45:48 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Zf3LgaHvlx 00:36:20.454 07:45:48 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx 00:36:20.454 07:45:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:20.454 07:45:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx 00:36:20.454 07:45:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:20.454 07:45:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:20.454 07:45:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:20.454 07:45:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:20.454 07:45:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx 00:36:20.454 07:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx 00:36:20.713 [2024-11-26 07:45:48.617839] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Zf3LgaHvlx': 0100660 00:36:20.713 [2024-11-26 07:45:48.617865] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:20.713 request: 00:36:20.713 { 00:36:20.713 "name": "key0", 00:36:20.713 "path": "/tmp/tmp.Zf3LgaHvlx", 00:36:20.713 "method": "keyring_file_add_key", 00:36:20.713 "req_id": 1 00:36:20.713 } 00:36:20.713 Got JSON-RPC error response 00:36:20.713 response: 00:36:20.713 { 00:36:20.713 "code": -1, 00:36:20.713 "message": "Operation not permitted" 00:36:20.713 } 00:36:20.713 07:45:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:20.713 07:45:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:20.714 07:45:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:20.714 07:45:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:20.714 07:45:48 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Zf3LgaHvlx 00:36:20.714 07:45:48 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx 00:36:20.714 07:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Zf3LgaHvlx 00:36:20.973 07:45:48 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Zf3LgaHvlx 00:36:20.973 07:45:48 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:20.973 07:45:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:20.973 07:45:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.973 07:45:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.973 07:45:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.973 07:45:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:20.973 07:45:49 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:20.973 07:45:49 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.973 07:45:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:20.973 07:45:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.973 07:45:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:20.973 07:45:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:20.973 07:45:49 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:20.973 07:45:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:20.973 07:45:49 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.973 07:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:21.233 [2024-11-26 07:45:49.203399] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Zf3LgaHvlx': No such file or directory 00:36:21.233 [2024-11-26 07:45:49.203421] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:21.233 [2024-11-26 07:45:49.203436] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:21.233 [2024-11-26 07:45:49.203444] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:21.233 [2024-11-26 07:45:49.203451] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:21.233 [2024-11-26 07:45:49.203457] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:21.233 request: 00:36:21.233 { 00:36:21.233 "name": "nvme0", 00:36:21.233 "trtype": "tcp", 00:36:21.233 "traddr": "127.0.0.1", 00:36:21.233 "adrfam": "ipv4", 00:36:21.233 "trsvcid": "4420", 00:36:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:21.233 "prchk_reftag": false, 00:36:21.233 "prchk_guard": false, 00:36:21.233 "hdgst": false, 00:36:21.233 "ddgst": false, 00:36:21.233 "psk": "key0", 00:36:21.233 "allow_unrecognized_csi": false, 00:36:21.233 "method": "bdev_nvme_attach_controller", 00:36:21.233 "req_id": 1 00:36:21.233 } 00:36:21.233 Got JSON-RPC error response 00:36:21.233 response: 00:36:21.233 { 00:36:21.233 "code": -19, 00:36:21.233 "message": "No such device" 00:36:21.233 } 00:36:21.233 07:45:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:21.233 07:45:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.233 07:45:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.233 07:45:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:21.233 07:45:49 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:21.233 07:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:21.493 07:45:49 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CTRQg9Aqh8 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:21.493 07:45:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:21.493 07:45:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:21.493 07:45:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:21.493 07:45:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:21.493 07:45:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:21.493 07:45:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CTRQg9Aqh8 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CTRQg9Aqh8 00:36:21.493 07:45:49 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.CTRQg9Aqh8 00:36:21.493 07:45:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CTRQg9Aqh8 00:36:21.493 07:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CTRQg9Aqh8 00:36:21.753 07:45:49 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:21.753 07:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:22.023 nvme0n1 00:36:22.023 07:45:49 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:22.023 07:45:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.023 07:45:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.023 07:45:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.023 07:45:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.023 07:45:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.023 07:45:50 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:22.023 07:45:50 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:22.023 07:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:22.285 07:45:50 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:22.285 07:45:50 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:22.285 07:45:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.285 07:45:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.285 07:45:50 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.544 07:45:50 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:22.544 07:45:50 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:22.544 07:45:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.544 07:45:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.544 07:45:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.544 07:45:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.544 07:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.803 07:45:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:22.803 07:45:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:22.803 07:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:22.803 07:45:50 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:22.803 07:45:50 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:22.803 07:45:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.063 07:45:51 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:23.063 07:45:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CTRQg9Aqh8 00:36:23.063 07:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CTRQg9Aqh8 00:36:23.321 07:45:51 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gRNzm9QAq6 00:36:23.321 07:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gRNzm9QAq6 00:36:23.580 07:45:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.580 07:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.839 nvme0n1 00:36:23.839 07:45:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:23.839 07:45:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:24.099 07:45:51 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:24.099 "subsystems": [ 00:36:24.099 { 00:36:24.099 "subsystem": "keyring", 00:36:24.099 "config": [ 00:36:24.099 { 00:36:24.099 "method": "keyring_file_add_key", 00:36:24.099 "params": { 00:36:24.099 "name": "key0", 00:36:24.099 "path": "/tmp/tmp.CTRQg9Aqh8" 00:36:24.099 } 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "method": "keyring_file_add_key", 00:36:24.099 "params": { 00:36:24.099 "name": "key1", 00:36:24.099 "path": "/tmp/tmp.gRNzm9QAq6" 00:36:24.099 } 00:36:24.099 } 00:36:24.099 ] 00:36:24.099 
}, 00:36:24.099 { 00:36:24.099 "subsystem": "iobuf", 00:36:24.099 "config": [ 00:36:24.099 { 00:36:24.099 "method": "iobuf_set_options", 00:36:24.099 "params": { 00:36:24.099 "small_pool_count": 8192, 00:36:24.099 "large_pool_count": 1024, 00:36:24.099 "small_bufsize": 8192, 00:36:24.099 "large_bufsize": 135168, 00:36:24.099 "enable_numa": false 00:36:24.099 } 00:36:24.099 } 00:36:24.099 ] 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "subsystem": "sock", 00:36:24.099 "config": [ 00:36:24.099 { 00:36:24.099 "method": "sock_set_default_impl", 00:36:24.099 "params": { 00:36:24.099 "impl_name": "posix" 00:36:24.099 } 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "method": "sock_impl_set_options", 00:36:24.099 "params": { 00:36:24.099 "impl_name": "ssl", 00:36:24.099 "recv_buf_size": 4096, 00:36:24.099 "send_buf_size": 4096, 00:36:24.099 "enable_recv_pipe": true, 00:36:24.099 "enable_quickack": false, 00:36:24.099 "enable_placement_id": 0, 00:36:24.099 "enable_zerocopy_send_server": true, 00:36:24.099 "enable_zerocopy_send_client": false, 00:36:24.099 "zerocopy_threshold": 0, 00:36:24.099 "tls_version": 0, 00:36:24.099 "enable_ktls": false 00:36:24.099 } 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "method": "sock_impl_set_options", 00:36:24.099 "params": { 00:36:24.099 "impl_name": "posix", 00:36:24.099 "recv_buf_size": 2097152, 00:36:24.099 "send_buf_size": 2097152, 00:36:24.099 "enable_recv_pipe": true, 00:36:24.099 "enable_quickack": false, 00:36:24.099 "enable_placement_id": 0, 00:36:24.099 "enable_zerocopy_send_server": true, 00:36:24.099 "enable_zerocopy_send_client": false, 00:36:24.099 "zerocopy_threshold": 0, 00:36:24.099 "tls_version": 0, 00:36:24.099 "enable_ktls": false 00:36:24.099 } 00:36:24.099 } 00:36:24.099 ] 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "subsystem": "vmd", 00:36:24.099 "config": [] 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "subsystem": "accel", 00:36:24.099 "config": [ 00:36:24.099 { 00:36:24.099 "method": "accel_set_options", 00:36:24.099 "params": { 00:36:24.099 "small_cache_size": 128, 00:36:24.099 "large_cache_size": 16, 00:36:24.099 "task_count": 2048, 00:36:24.099 "sequence_count": 2048, 00:36:24.099 "buf_count": 2048 00:36:24.099 } 00:36:24.099 } 00:36:24.099 ] 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "subsystem": "bdev", 00:36:24.099 "config": [ 00:36:24.099 { 00:36:24.099 "method": "bdev_set_options", 00:36:24.099 "params": { 00:36:24.099 "bdev_io_pool_size": 65535, 00:36:24.099 "bdev_io_cache_size": 256, 00:36:24.099 "bdev_auto_examine": true, 00:36:24.099 "iobuf_small_cache_size": 128, 00:36:24.099 "iobuf_large_cache_size": 16 00:36:24.099 } 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "method": "bdev_raid_set_options", 00:36:24.099 "params": { 00:36:24.099 "process_window_size_kb": 1024, 00:36:24.099 "process_max_bandwidth_mb_sec": 0 00:36:24.099 } 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "method": "bdev_iscsi_set_options", 00:36:24.099 "params": { 00:36:24.099 "timeout_sec": 30 00:36:24.099 } 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "method": "bdev_nvme_set_options", 00:36:24.099 "params": { 00:36:24.099 "action_on_timeout": "none", 00:36:24.099 "timeout_us": 0, 00:36:24.099 "timeout_admin_us": 0, 00:36:24.099 "keep_alive_timeout_ms": 10000, 00:36:24.099 "arbitration_burst": 0, 00:36:24.099 "low_priority_weight": 0, 00:36:24.099 "medium_priority_weight": 0, 00:36:24.099 "high_priority_weight": 0, 00:36:24.099 "nvme_adminq_poll_period_us": 10000, 00:36:24.099 "nvme_ioq_poll_period_us": 0, 00:36:24.099 "io_queue_requests": 512, 00:36:24.099 
"delay_cmd_submit": true, 00:36:24.099 "transport_retry_count": 4, 00:36:24.099 "bdev_retry_count": 3, 00:36:24.099 "transport_ack_timeout": 0, 00:36:24.099 "ctrlr_loss_timeout_sec": 0, 00:36:24.099 "reconnect_delay_sec": 0, 00:36:24.099 "fast_io_fail_timeout_sec": 0, 00:36:24.099 "disable_auto_failback": false, 00:36:24.099 "generate_uuids": false, 00:36:24.099 "transport_tos": 0, 00:36:24.099 "nvme_error_stat": false, 00:36:24.099 "rdma_srq_size": 0, 00:36:24.099 "io_path_stat": false, 00:36:24.099 "allow_accel_sequence": false, 00:36:24.099 "rdma_max_cq_size": 0, 00:36:24.099 "rdma_cm_event_timeout_ms": 0, 00:36:24.099 "dhchap_digests": [ 00:36:24.099 "sha256", 00:36:24.099 "sha384", 00:36:24.099 "sha512" 00:36:24.099 ], 00:36:24.099 "dhchap_dhgroups": [ 00:36:24.099 "null", 00:36:24.099 "ffdhe2048", 00:36:24.099 "ffdhe3072", 00:36:24.099 "ffdhe4096", 00:36:24.099 "ffdhe6144", 00:36:24.099 "ffdhe8192" 00:36:24.099 ] 00:36:24.099 } 00:36:24.099 }, 00:36:24.099 { 00:36:24.099 "method": "bdev_nvme_attach_controller", 00:36:24.099 "params": { 00:36:24.099 "name": "nvme0", 00:36:24.099 "trtype": "TCP", 00:36:24.099 "adrfam": "IPv4", 00:36:24.099 "traddr": "127.0.0.1", 00:36:24.099 "trsvcid": "4420", 00:36:24.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.099 "prchk_reftag": false, 00:36:24.099 "prchk_guard": false, 00:36:24.099 "ctrlr_loss_timeout_sec": 0, 00:36:24.099 "reconnect_delay_sec": 0, 00:36:24.099 "fast_io_fail_timeout_sec": 0, 00:36:24.099 "psk": "key0", 00:36:24.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.100 "hdgst": false, 00:36:24.100 "ddgst": false, 00:36:24.100 "multipath": "multipath" 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "bdev_nvme_set_hotplug", 00:36:24.100 "params": { 00:36:24.100 "period_us": 100000, 00:36:24.100 "enable": false 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "bdev_wait_for_examine" 00:36:24.100 } 00:36:24.100 ] 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "subsystem": "nbd", 00:36:24.100 "config": [] 00:36:24.100 } 00:36:24.100 ] 00:36:24.100 }' 00:36:24.100 07:45:51 keyring_file -- keyring/file.sh@115 -- # killprocess 1014653 00:36:24.100 07:45:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1014653 ']' 00:36:24.100 07:45:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1014653 00:36:24.100 07:45:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:24.100 07:45:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.100 07:45:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1014653 00:36:24.100 07:45:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:24.100 07:45:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:24.100 07:45:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1014653' 00:36:24.100 killing process with pid 1014653 00:36:24.100 07:45:52 keyring_file -- common/autotest_common.sh@973 -- # kill 1014653 00:36:24.100 Received shutdown signal, test time was about 1.000000 seconds 00:36:24.100 00:36:24.100 Latency(us) 00:36:24.100 [2024-11-26T06:45:52.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.100 [2024-11-26T06:45:52.200Z] =================================================================================================================== 00:36:24.100 [2024-11-26T06:45:52.200Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:24.100 07:45:52 
keyring_file -- common/autotest_common.sh@978 -- # wait 1014653 00:36:24.100 07:45:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=1016169 00:36:24.100 07:45:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1016169 /var/tmp/bperf.sock 00:36:24.100 07:45:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1016169 ']' 00:36:24.100 07:45:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:24.100 07:45:52 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:24.100 07:45:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.100 07:45:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:24.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:24.100 07:45:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:24.100 "subsystems": [ 00:36:24.100 { 00:36:24.100 "subsystem": "keyring", 00:36:24.100 "config": [ 00:36:24.100 { 00:36:24.100 "method": "keyring_file_add_key", 00:36:24.100 "params": { 00:36:24.100 "name": "key0", 00:36:24.100 "path": "/tmp/tmp.CTRQg9Aqh8" 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "keyring_file_add_key", 00:36:24.100 "params": { 00:36:24.100 "name": "key1", 00:36:24.100 "path": "/tmp/tmp.gRNzm9QAq6" 00:36:24.100 } 00:36:24.100 } 00:36:24.100 ] 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "subsystem": "iobuf", 00:36:24.100 "config": [ 00:36:24.100 { 00:36:24.100 "method": "iobuf_set_options", 00:36:24.100 "params": { 00:36:24.100 "small_pool_count": 8192, 00:36:24.100 "large_pool_count": 1024, 00:36:24.100 "small_bufsize": 8192, 00:36:24.100 "large_bufsize": 135168, 00:36:24.100 "enable_numa": false 00:36:24.100 } 00:36:24.100 } 00:36:24.100 ] 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "subsystem": "sock", 00:36:24.100 "config": [ 00:36:24.100 { 00:36:24.100 "method": "sock_set_default_impl", 00:36:24.100 "params": { 00:36:24.100 "impl_name": "posix" 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "sock_impl_set_options", 00:36:24.100 "params": { 00:36:24.100 "impl_name": "ssl", 00:36:24.100 "recv_buf_size": 4096, 00:36:24.100 "send_buf_size": 4096, 00:36:24.100 "enable_recv_pipe": true, 00:36:24.100 "enable_quickack": false, 00:36:24.100 "enable_placement_id": 0, 00:36:24.100 "enable_zerocopy_send_server": true, 00:36:24.100 "enable_zerocopy_send_client": false, 00:36:24.100 "zerocopy_threshold": 0, 00:36:24.100 "tls_version": 0, 00:36:24.100 "enable_ktls": false 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "sock_impl_set_options", 00:36:24.100 "params": { 00:36:24.100 "impl_name": "posix", 00:36:24.100 "recv_buf_size": 2097152, 00:36:24.100 "send_buf_size": 2097152, 00:36:24.100 "enable_recv_pipe": true, 00:36:24.100 "enable_quickack": false, 00:36:24.100 "enable_placement_id": 0, 00:36:24.100 "enable_zerocopy_send_server": true, 00:36:24.100 "enable_zerocopy_send_client": false, 00:36:24.100 "zerocopy_threshold": 0, 00:36:24.100 "tls_version": 0, 00:36:24.100 "enable_ktls": false 00:36:24.100 } 00:36:24.100 } 00:36:24.100 ] 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "subsystem": "vmd", 00:36:24.100 "config": [] 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "subsystem": "accel", 00:36:24.100 "config": [ 00:36:24.100 
{ 00:36:24.100 "method": "accel_set_options", 00:36:24.100 "params": { 00:36:24.100 "small_cache_size": 128, 00:36:24.100 "large_cache_size": 16, 00:36:24.100 "task_count": 2048, 00:36:24.100 "sequence_count": 2048, 00:36:24.100 "buf_count": 2048 00:36:24.100 } 00:36:24.100 } 00:36:24.100 ] 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "subsystem": "bdev", 00:36:24.100 "config": [ 00:36:24.100 { 00:36:24.100 "method": "bdev_set_options", 00:36:24.100 "params": { 00:36:24.100 "bdev_io_pool_size": 65535, 00:36:24.100 "bdev_io_cache_size": 256, 00:36:24.100 "bdev_auto_examine": true, 00:36:24.100 "iobuf_small_cache_size": 128, 00:36:24.100 "iobuf_large_cache_size": 16 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "bdev_raid_set_options", 00:36:24.100 "params": { 00:36:24.100 "process_window_size_kb": 1024, 00:36:24.100 "process_max_bandwidth_mb_sec": 0 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "bdev_iscsi_set_options", 00:36:24.100 "params": { 00:36:24.100 "timeout_sec": 30 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "bdev_nvme_set_options", 00:36:24.100 "params": { 00:36:24.100 "action_on_timeout": "none", 00:36:24.100 "timeout_us": 0, 00:36:24.100 "timeout_admin_us": 0, 00:36:24.100 "keep_alive_timeout_ms": 10000, 00:36:24.100 "arbitration_burst": 0, 00:36:24.100 "low_priority_weight": 0, 00:36:24.100 "medium_priority_weight": 0, 00:36:24.100 "high_priority_weight": 0, 00:36:24.100 "nvme_adminq_poll_period_us": 10000, 00:36:24.100 "nvme_ioq_poll_period_us": 0, 00:36:24.100 "io_queue_requests": 512, 00:36:24.100 "delay_cmd_submit": true, 00:36:24.100 "transport_retry_count": 4, 00:36:24.100 "bdev_retry_count": 3, 00:36:24.100 "transport_ack_timeout": 0, 00:36:24.100 "ctrlr_loss_timeout_sec": 0, 00:36:24.100 "reconnect_delay_sec": 0, 00:36:24.100 "fast_io_fail_timeout_sec": 0, 00:36:24.100 "disable_auto_failback": false, 00:36:24.100 "generate_uuids": false, 00:36:24.100 "transport_tos": 0, 00:36:24.100 "nvme_error_stat": false, 00:36:24.100 "rdma_srq_size": 0, 00:36:24.100 "io_path_stat": false, 00:36:24.100 "allow_accel_sequence": false, 00:36:24.100 "rdma_max_cq_size": 0, 00:36:24.100 "rdma_cm_event_timeout_ms": 0, 00:36:24.100 "dhchap_digests": [ 00:36:24.100 "sha256", 00:36:24.100 "sha384", 00:36:24.100 "sha512" 00:36:24.100 ], 00:36:24.100 "dhchap_dhgroups": [ 00:36:24.100 "null", 00:36:24.100 "ffdhe2048", 00:36:24.100 "ffdhe3072", 00:36:24.100 "ffdhe4096", 00:36:24.100 "ffdhe6144", 00:36:24.100 "ffdhe8192" 00:36:24.100 ] 00:36:24.100 } 00:36:24.100 }, 00:36:24.100 { 00:36:24.100 "method": "bdev_nvme_attach_controller", 00:36:24.100 "params": { 00:36:24.100 "name": "nvme0", 00:36:24.101 "trtype": "TCP", 00:36:24.101 "adrfam": "IPv4", 00:36:24.101 "traddr": "127.0.0.1", 00:36:24.101 "trsvcid": "4420", 00:36:24.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.101 "prchk_reftag": false, 00:36:24.101 "prchk_guard": false, 00:36:24.101 "ctrlr_loss_timeout_sec": 0, 00:36:24.101 "reconnect_delay_sec": 0, 00:36:24.101 "fast_io_fail_timeout_sec": 0, 00:36:24.101 "psk": "key0", 00:36:24.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.101 "hdgst": false, 00:36:24.101 "ddgst": false, 00:36:24.101 "multipath": "multipath" 00:36:24.101 } 00:36:24.101 }, 00:36:24.101 { 00:36:24.101 "method": "bdev_nvme_set_hotplug", 00:36:24.101 "params": { 00:36:24.101 "period_us": 100000, 00:36:24.101 "enable": false 00:36:24.101 } 00:36:24.101 }, 00:36:24.101 { 00:36:24.101 "method": "bdev_wait_for_examine" 00:36:24.101 } 00:36:24.101 
] 00:36:24.101 }, 00:36:24.101 { 00:36:24.101 "subsystem": "nbd", 00:36:24.101 "config": [] 00:36:24.101 } 00:36:24.101 ] 00:36:24.101 }' 00:36:24.101 07:45:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.101 07:45:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:24.360 [2024-11-26 07:45:52.214425] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 00:36:24.360 [2024-11-26 07:45:52.214478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016169 ] 00:36:24.360 [2024-11-26 07:45:52.275781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.360 [2024-11-26 07:45:52.316040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.619 [2024-11-26 07:45:52.477186] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:25.187 07:45:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.187 07:45:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:25.187 07:45:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:25.187 07:45:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:25.187 07:45:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.187 07:45:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:25.187 07:45:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:25.187 07:45:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.187 07:45:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.187 07:45:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.187 07:45:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.187 07:45:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.445 07:45:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:25.445 07:45:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:25.445 07:45:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:25.445 07:45:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.445 07:45:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.445 07:45:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:25.445 07:45:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.704 07:45:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:25.704 07:45:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:25.704 07:45:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:25.704 07:45:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:25.963 07:45:53 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:25.963 07:45:53 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:25.963 07:45:53 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.CTRQg9Aqh8 /tmp/tmp.gRNzm9QAq6 00:36:25.963 07:45:53 keyring_file -- keyring/file.sh@20 -- # killprocess 1016169 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1016169 ']' 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1016169 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1016169 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1016169' 00:36:25.963 killing process with pid 1016169 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@973 -- # kill 1016169 00:36:25.963 Received shutdown signal, test time was about 1.000000 seconds 00:36:25.963 00:36:25.963 Latency(us) 00:36:25.963 [2024-11-26T06:45:54.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:25.963 [2024-11-26T06:45:54.063Z] =================================================================================================================== 00:36:25.963 [2024-11-26T06:45:54.063Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:25.963 07:45:53 keyring_file -- common/autotest_common.sh@978 -- # wait 1016169 00:36:26.222 07:45:54 keyring_file -- keyring/file.sh@21 -- # killprocess 1014588 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1014588 ']' 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1014588 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1014588 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1014588' 00:36:26.222 killing process with pid 1014588 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@973 -- # kill 1014588 00:36:26.222 07:45:54 keyring_file -- common/autotest_common.sh@978 -- # wait 1014588 00:36:26.482 00:36:26.482 real 0m11.661s 00:36:26.482 user 0m28.995s 00:36:26.482 sys 0m2.692s 00:36:26.482 07:45:54 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.482 07:45:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.482 ************************************ 00:36:26.482 END TEST keyring_file 00:36:26.482 ************************************ 00:36:26.482 07:45:54 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:26.482 07:45:54 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:26.482 07:45:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:26.482 07:45:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:26.482 07:45:54 -- 
common/autotest_common.sh@10 -- # set +x 00:36:26.482 ************************************ 00:36:26.482 START TEST keyring_linux 00:36:26.482 ************************************ 00:36:26.482 07:45:54 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:26.482 Joined session keyring: 610175090 00:36:26.482 * Looking for test storage... 00:36:26.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:26.482 07:45:54 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:26.742 07:45:54 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:36:26.742 07:45:54 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:26.742 07:45:54 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:26.742 07:45:54 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:26.742 07:45:54 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.742 --rc genhtml_branch_coverage=1 00:36:26.742 --rc genhtml_function_coverage=1 00:36:26.742 --rc genhtml_legend=1 00:36:26.742 --rc geninfo_all_blocks=1 00:36:26.742 --rc geninfo_unexecuted_blocks=1 00:36:26.742 00:36:26.742 ' 00:36:26.742 07:45:54 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.742 --rc genhtml_branch_coverage=1 00:36:26.742 --rc genhtml_function_coverage=1 00:36:26.742 --rc genhtml_legend=1 00:36:26.742 --rc geninfo_all_blocks=1 00:36:26.742 --rc geninfo_unexecuted_blocks=1 00:36:26.742 00:36:26.742 ' 00:36:26.742 07:45:54 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.742 --rc genhtml_branch_coverage=1 00:36:26.742 --rc genhtml_function_coverage=1 00:36:26.742 --rc genhtml_legend=1 00:36:26.742 --rc geninfo_all_blocks=1 00:36:26.742 --rc geninfo_unexecuted_blocks=1 00:36:26.742 00:36:26.742 ' 00:36:26.742 07:45:54 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.742 --rc genhtml_branch_coverage=1 00:36:26.742 --rc genhtml_function_coverage=1 00:36:26.742 --rc genhtml_legend=1 00:36:26.742 --rc geninfo_all_blocks=1 00:36:26.742 --rc geninfo_unexecuted_blocks=1 00:36:26.742 00:36:26.742 ' 00:36:26.742 07:45:54 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:26.742 07:45:54 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:26.742 07:45:54 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:26.742 07:45:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.742 07:45:54 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.742 07:45:54 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.742 07:45:54 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:26.742 07:45:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
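Before going further into keyring_linux, the keyring_file sequence exercised above condenses to the sketch below. The paths, key names and the /var/tmp/bperf.sock RPC socket are the ones that appear in the trace; rpc.py is shown relative to the SPDK tree rather than the full workspace path, and the grouping is an illustration of the flow, not a verbatim excerpt of keyring/file.sh.

  # A file-based TLS PSK must not be group- or world-accessible: the 0660 attempt
  # above is rejected with "Invalid permissions for key file", so keep it at 0600.
  KEYFILE=/tmp/tmp.CTRQg9Aqh8
  chmod 0600 "$KEYFILE"
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$KEYFILE"

  # Attach an NVMe-oF TCP controller that uses the named key for TLS.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

  # Keys are reference-counted while a controller holds them; removing a key in use
  # marks it removed=true and it only disappears once the controller detaches
  # (this is what the get_refcnt / jq -r .removed checks above verify).
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0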
00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:26.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:26.742 07:45:54 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:26.742 07:45:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:26.742 07:45:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:26.742 07:45:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:26.743 07:45:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:26.743 07:45:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:26.743 07:45:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:26.743 07:45:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:26.743 /tmp/:spdk-test:key0 00:36:26.743 07:45:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:26.743 
07:45:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:26.743 07:45:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:26.743 07:45:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:26.743 /tmp/:spdk-test:key1 00:36:26.743 07:45:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1016669 00:36:26.743 07:45:54 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1016669 00:36:26.743 07:45:54 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:26.743 07:45:54 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1016669 ']' 00:36:26.743 07:45:54 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.743 07:45:54 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:26.743 07:45:54 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.743 07:45:54 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:26.743 07:45:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:26.743 [2024-11-26 07:45:54.808288] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
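The prep_key helper above builds the NVMe TLS PSK interchange string that the rest of this run uses (the NVMeTLSkey-1:00:...: values visible below). A rough equivalent of its inline python step -- assuming, as the encoded output here suggests, that the payload is the configured PSK followed by its CRC32 in little-endian order -- looks like this; treat it as a sketch of the format, not the canonical implementation in nvmf/common.sh:

  # NVMeTLSkey-1:<hmac>:base64(PSK || CRC32(PSK)):  -- hmac "00" means no HMAC,
  # and this test uses the literal ASCII hex string below as the PSK bytes.
  key=00112233445566778899aabbccddeeff
  hmac=00
  python3 -c '
  import sys, base64, struct, zlib
  psk = sys.argv[1].encode()
  payload = psk + struct.pack("<I", zlib.crc32(psk))
  print("NVMeTLSkey-1:%s:%s:" % (sys.argv[2], base64.b64encode(payload).decode()))
  ' "$key" "$hmac" > /tmp/:spdk-test:key0
  chmod 0600 /tmp/:spdk-test:key0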
00:36:26.743 [2024-11-26 07:45:54.808341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016669 ] 00:36:27.002 [2024-11-26 07:45:54.870495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.002 [2024-11-26 07:45:54.913242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:27.261 07:45:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:27.261 [2024-11-26 07:45:55.120748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:27.261 null0 00:36:27.261 [2024-11-26 07:45:55.152807] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:27.261 [2024-11-26 07:45:55.153164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.261 07:45:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:27.261 812303972 00:36:27.261 07:45:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:27.261 119705647 00:36:27.261 07:45:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1016733 00:36:27.261 07:45:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1016733 /var/tmp/bperf.sock 00:36:27.261 07:45:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1016733 ']' 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:27.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:27.261 07:45:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:27.261 [2024-11-26 07:45:55.225617] Starting SPDK v25.01-pre git sha1 9c7e54d62 / DPDK 24.03.0 initialization... 
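For keyring_linux the PSK never reaches SPDK as a file: the interchange string is loaded into the kernel session keyring with keyctl and referenced by its description. Condensed from the commands in this part of the trace (the serial numbers, 812303972 and 119705647 here, differ per run; reading the payload back from the files prepared above is a convenience of this sketch -- the trace passes the expanded string directly):

  # Load both interchange-format PSKs into the session keyring (@s); keyctl prints the serial.
  keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
  keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s

  # bdevperf was started with --wait-for-rpc, so the kernel-keyring backend is
  # enabled before framework initialization finishes.
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

  # The controller references the key by its keyring description instead of a file key.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # check_keys resolves and inspects the key the same way:
  keyctl search @s user :spdk-test:key0    # -> serial, e.g. 812303972
  keyctl print 812303972                   # -> NVMeTLSkey-1:00:...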
00:36:27.261 [2024-11-26 07:45:55.225661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016733 ] 00:36:27.261 [2024-11-26 07:45:55.287515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.261 [2024-11-26 07:45:55.330246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.520 07:45:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.520 07:45:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:27.520 07:45:55 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:27.520 07:45:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:27.520 07:45:55 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:27.520 07:45:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:27.779 07:45:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:27.779 07:45:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:28.038 [2024-11-26 07:45:55.974088] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:28.038 nvme0n1 00:36:28.038 07:45:56 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:28.038 07:45:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:28.038 07:45:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:28.038 07:45:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:28.038 07:45:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:28.038 07:45:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.297 07:45:56 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:28.297 07:45:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:28.297 07:45:56 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:28.297 07:45:56 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:28.297 07:45:56 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.297 07:45:56 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:28.297 07:45:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.557 07:45:56 keyring_linux -- keyring/linux.sh@25 -- # sn=812303972 00:36:28.557 07:45:56 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:28.557 07:45:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:28.557 07:45:56 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 812303972 == \8\1\2\3\0\3\9\7\2 ]] 00:36:28.557 07:45:56 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 812303972 00:36:28.557 07:45:56 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:28.557 07:45:56 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:28.557 Running I/O for 1 seconds... 00:36:29.494 20866.00 IOPS, 81.51 MiB/s 00:36:29.494 Latency(us) 00:36:29.494 [2024-11-26T06:45:57.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.494 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:29.494 nvme0n1 : 1.01 20862.40 81.49 0.00 0.00 6114.42 2008.82 7465.41 00:36:29.494 [2024-11-26T06:45:57.594Z] =================================================================================================================== 00:36:29.494 [2024-11-26T06:45:57.594Z] Total : 20862.40 81.49 0.00 0.00 6114.42 2008.82 7465.41 00:36:29.494 { 00:36:29.494 "results": [ 00:36:29.494 { 00:36:29.494 "job": "nvme0n1", 00:36:29.494 "core_mask": "0x2", 00:36:29.494 "workload": "randread", 00:36:29.494 "status": "finished", 00:36:29.494 "queue_depth": 128, 00:36:29.494 "io_size": 4096, 00:36:29.494 "runtime": 1.006356, 00:36:29.494 "iops": 20862.398594533148, 00:36:29.494 "mibps": 81.49374450989511, 00:36:29.494 "io_failed": 0, 00:36:29.494 "io_timeout": 0, 00:36:29.494 "avg_latency_us": 6114.416444163725, 00:36:29.494 "min_latency_us": 2008.8208695652174, 00:36:29.494 "max_latency_us": 7465.405217391304 00:36:29.494 } 00:36:29.494 ], 00:36:29.494 "core_count": 1 00:36:29.494 } 00:36:29.494 07:45:57 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:29.494 07:45:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:29.753 07:45:57 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:29.753 07:45:57 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:29.754 07:45:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:29.754 07:45:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:29.754 07:45:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:29.754 07:45:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.013 07:45:57 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:30.013 07:45:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:30.013 07:45:57 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:30.013 07:45:57 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:30.013 07:45:57 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:30.013 07:45:57 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:36:30.013 07:45:57 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:30.013 07:45:57 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:30.013 07:45:57 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:30.013 07:45:57 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:30.013 07:45:57 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:30.013 07:45:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:30.272 [2024-11-26 07:45:58.171753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:30.272 [2024-11-26 07:45:58.172708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7a40 (107): Transport endpoint is not connected 00:36:30.272 [2024-11-26 07:45:58.173703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de7a40 (9): Bad file descriptor 00:36:30.272 [2024-11-26 07:45:58.174705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:30.272 [2024-11-26 07:45:58.174715] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:30.272 [2024-11-26 07:45:58.174722] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:30.272 [2024-11-26 07:45:58.174732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:30.272 request: 00:36:30.272 { 00:36:30.272 "name": "nvme0", 00:36:30.272 "trtype": "tcp", 00:36:30.272 "traddr": "127.0.0.1", 00:36:30.272 "adrfam": "ipv4", 00:36:30.272 "trsvcid": "4420", 00:36:30.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:30.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:30.272 "prchk_reftag": false, 00:36:30.272 "prchk_guard": false, 00:36:30.272 "hdgst": false, 00:36:30.272 "ddgst": false, 00:36:30.272 "psk": ":spdk-test:key1", 00:36:30.272 "allow_unrecognized_csi": false, 00:36:30.272 "method": "bdev_nvme_attach_controller", 00:36:30.272 "req_id": 1 00:36:30.272 } 00:36:30.272 Got JSON-RPC error response 00:36:30.272 response: 00:36:30.272 { 00:36:30.272 "code": -5, 00:36:30.272 "message": "Input/output error" 00:36:30.272 } 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@33 -- # sn=812303972 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 812303972 00:36:30.272 1 links removed 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@33 -- # sn=119705647 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 119705647 00:36:30.272 1 links removed 00:36:30.272 07:45:58 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1016733 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1016733 ']' 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1016733 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1016733 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1016733' 00:36:30.272 killing process with pid 1016733 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@973 -- # kill 1016733 00:36:30.272 Received shutdown signal, test time was about 1.000000 seconds 00:36:30.272 00:36:30.272 
Latency(us) 00:36:30.272 [2024-11-26T06:45:58.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.272 [2024-11-26T06:45:58.372Z] =================================================================================================================== 00:36:30.272 [2024-11-26T06:45:58.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:30.272 07:45:58 keyring_linux -- common/autotest_common.sh@978 -- # wait 1016733 00:36:30.530 07:45:58 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1016669 00:36:30.530 07:45:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1016669 ']' 00:36:30.530 07:45:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1016669 00:36:30.530 07:45:58 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:30.530 07:45:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:30.530 07:45:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1016669 00:36:30.531 07:45:58 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:30.531 07:45:58 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:30.531 07:45:58 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1016669' 00:36:30.531 killing process with pid 1016669 00:36:30.531 07:45:58 keyring_linux -- common/autotest_common.sh@973 -- # kill 1016669 00:36:30.531 07:45:58 keyring_linux -- common/autotest_common.sh@978 -- # wait 1016669 00:36:30.789 00:36:30.789 real 0m4.260s 00:36:30.789 user 0m8.043s 00:36:30.789 sys 0m1.372s 00:36:30.789 07:45:58 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:30.789 07:45:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.789 ************************************ 00:36:30.789 END TEST keyring_linux 00:36:30.789 ************************************ 00:36:30.789 07:45:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:30.789 07:45:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:30.789 07:45:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:30.789 07:45:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:30.789 07:45:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:30.789 07:45:58 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:30.789 07:45:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:30.789 07:45:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:30.789 07:45:58 -- common/autotest_common.sh@10 -- # set +x 00:36:30.789 07:45:58 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:30.789 07:45:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:30.789 07:45:58 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:30.789 07:45:58 -- common/autotest_common.sh@10 -- # set +x 00:36:36.062 INFO: APP EXITING 
00:36:36.062 INFO: killing all VMs 00:36:36.062 INFO: killing vhost app 00:36:36.062 INFO: EXIT DONE 00:36:37.968 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:37.968 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:37.968 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:37.968 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:37.968 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:37.968 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:37.968 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:37.968 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:38.228 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:40.766 Cleaning 00:36:40.766 Removing: /var/run/dpdk/spdk0/config 00:36:40.766 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:40.767 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:40.767 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:40.767 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:40.767 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:40.767 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:40.767 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:40.767 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:40.767 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:40.767 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:40.767 Removing: /var/run/dpdk/spdk1/config 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:40.767 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:40.767 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:40.767 Removing: /var/run/dpdk/spdk2/config 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:40.767 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:40.767 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:40.767 Removing: /var/run/dpdk/spdk3/config 00:36:40.767 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:40.767 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:40.767 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:40.767 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:40.767 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:40.767 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:40.767 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:40.767 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:40.767 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:40.767 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:40.767 Removing: /var/run/dpdk/spdk4/config 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:40.767 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:40.767 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:40.767 Removing: /dev/shm/bdev_svc_trace.1 00:36:40.767 Removing: /dev/shm/nvmf_trace.0 00:36:40.767 Removing: /dev/shm/spdk_tgt_trace.pid544616 00:36:40.767 Removing: /var/run/dpdk/spdk0 00:36:40.767 Removing: /var/run/dpdk/spdk1 00:36:40.767 Removing: /var/run/dpdk/spdk2 00:36:40.767 Removing: /var/run/dpdk/spdk3 00:36:40.767 Removing: /var/run/dpdk/spdk4 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1006523 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1007175 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1007637 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1009902 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1010371 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1010858 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1014588 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1014653 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1016169 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1016669 00:36:40.767 Removing: /var/run/dpdk/spdk_pid1016733 00:36:40.767 Removing: /var/run/dpdk/spdk_pid422696 00:36:40.767 Removing: /var/run/dpdk/spdk_pid542118 00:36:40.767 Removing: /var/run/dpdk/spdk_pid543289 00:36:40.767 Removing: /var/run/dpdk/spdk_pid544616 00:36:40.767 Removing: /var/run/dpdk/spdk_pid545400 00:36:40.767 Removing: /var/run/dpdk/spdk_pid546344 00:36:40.767 Removing: /var/run/dpdk/spdk_pid546369 00:36:40.767 Removing: /var/run/dpdk/spdk_pid547341 00:36:40.767 Removing: /var/run/dpdk/spdk_pid547563 00:36:40.767 Removing: /var/run/dpdk/spdk_pid547758 00:36:40.767 Removing: /var/run/dpdk/spdk_pid549430 00:36:40.767 Removing: /var/run/dpdk/spdk_pid550711 00:36:40.767 Removing: /var/run/dpdk/spdk_pid551002 00:36:40.767 Removing: /var/run/dpdk/spdk_pid551290 00:36:40.767 Removing: /var/run/dpdk/spdk_pid551594 00:36:40.767 Removing: /var/run/dpdk/spdk_pid551884 00:36:40.767 Removing: /var/run/dpdk/spdk_pid552133 00:36:40.767 Removing: /var/run/dpdk/spdk_pid552362 00:36:40.767 Removing: /var/run/dpdk/spdk_pid552675 00:36:40.767 Removing: /var/run/dpdk/spdk_pid553415 00:36:40.767 Removing: /var/run/dpdk/spdk_pid556413 00:36:40.767 Removing: /var/run/dpdk/spdk_pid556671 00:36:40.767 Removing: /var/run/dpdk/spdk_pid556709 00:36:40.767 Removing: /var/run/dpdk/spdk_pid556875 
00:36:40.767 Removing: /var/run/dpdk/spdk_pid557214 00:36:40.767 Removing: /var/run/dpdk/spdk_pid557380 00:36:40.767 Removing: /var/run/dpdk/spdk_pid557711 00:36:40.767 Removing: /var/run/dpdk/spdk_pid557853 00:36:40.767 Removing: /var/run/dpdk/spdk_pid558189 00:36:40.767 Removing: /var/run/dpdk/spdk_pid558200 00:36:40.767 Removing: /var/run/dpdk/spdk_pid558458 00:36:40.767 Removing: /var/run/dpdk/spdk_pid558466 00:36:40.767 Removing: /var/run/dpdk/spdk_pid559031 00:36:40.767 Removing: /var/run/dpdk/spdk_pid559272 00:36:40.767 Removing: /var/run/dpdk/spdk_pid559578 00:36:40.767 Removing: /var/run/dpdk/spdk_pid563276 00:36:40.767 Removing: /var/run/dpdk/spdk_pid567407 00:36:40.767 Removing: /var/run/dpdk/spdk_pid577590 00:36:40.767 Removing: /var/run/dpdk/spdk_pid578277 00:36:40.767 Removing: /var/run/dpdk/spdk_pid582354 00:36:40.767 Removing: /var/run/dpdk/spdk_pid582802 00:36:40.767 Removing: /var/run/dpdk/spdk_pid587089 00:36:40.767 Removing: /var/run/dpdk/spdk_pid593246 00:36:40.767 Removing: /var/run/dpdk/spdk_pid595850 00:36:40.767 Removing: /var/run/dpdk/spdk_pid605943 00:36:40.767 Removing: /var/run/dpdk/spdk_pid614804 00:36:40.767 Removing: /var/run/dpdk/spdk_pid616597 00:36:40.767 Removing: /var/run/dpdk/spdk_pid617526 00:36:40.767 Removing: /var/run/dpdk/spdk_pid634176 00:36:40.767 Removing: /var/run/dpdk/spdk_pid638363 00:36:40.767 Removing: /var/run/dpdk/spdk_pid683066 00:36:40.767 Removing: /var/run/dpdk/spdk_pid688422 00:36:40.767 Removing: /var/run/dpdk/spdk_pid694732 00:36:40.767 Removing: /var/run/dpdk/spdk_pid700997 00:36:40.767 Removing: /var/run/dpdk/spdk_pid700999 00:36:40.767 Removing: /var/run/dpdk/spdk_pid701850 00:36:40.767 Removing: /var/run/dpdk/spdk_pid702613 00:36:40.767 Removing: /var/run/dpdk/spdk_pid703538 00:36:40.767 Removing: /var/run/dpdk/spdk_pid704099 00:36:40.767 Removing: /var/run/dpdk/spdk_pid704230 00:36:40.767 Removing: /var/run/dpdk/spdk_pid704456 00:36:40.767 Removing: /var/run/dpdk/spdk_pid704476 00:36:40.767 Removing: /var/run/dpdk/spdk_pid704493 00:36:40.767 Removing: /var/run/dpdk/spdk_pid705391 00:36:40.767 Removing: /var/run/dpdk/spdk_pid706312 00:36:40.767 Removing: /var/run/dpdk/spdk_pid707226 00:36:40.767 Removing: /var/run/dpdk/spdk_pid707694 00:36:40.767 Removing: /var/run/dpdk/spdk_pid707699 00:36:40.767 Removing: /var/run/dpdk/spdk_pid708035 00:36:40.767 Removing: /var/run/dpdk/spdk_pid709166 00:36:41.027 Removing: /var/run/dpdk/spdk_pid710143 00:36:41.027 Removing: /var/run/dpdk/spdk_pid718232 00:36:41.027 Removing: /var/run/dpdk/spdk_pid747152 00:36:41.027 Removing: /var/run/dpdk/spdk_pid751590 00:36:41.027 Removing: /var/run/dpdk/spdk_pid753274 00:36:41.027 Removing: /var/run/dpdk/spdk_pid755011 00:36:41.027 Removing: /var/run/dpdk/spdk_pid755133 00:36:41.027 Removing: /var/run/dpdk/spdk_pid755362 00:36:41.027 Removing: /var/run/dpdk/spdk_pid755379 00:36:41.027 Removing: /var/run/dpdk/spdk_pid755884 00:36:41.027 Removing: /var/run/dpdk/spdk_pid757718 00:36:41.027 Removing: /var/run/dpdk/spdk_pid758481 00:36:41.027 Removing: /var/run/dpdk/spdk_pid758977 00:36:41.027 Removing: /var/run/dpdk/spdk_pid761130 00:36:41.027 Removing: /var/run/dpdk/spdk_pid761708 00:36:41.027 Removing: /var/run/dpdk/spdk_pid762307 00:36:41.027 Removing: /var/run/dpdk/spdk_pid766874 00:36:41.027 Removing: /var/run/dpdk/spdk_pid772256 00:36:41.027 Removing: /var/run/dpdk/spdk_pid772257 00:36:41.027 Removing: /var/run/dpdk/spdk_pid772258 00:36:41.027 Removing: /var/run/dpdk/spdk_pid776034 00:36:41.027 Removing: /var/run/dpdk/spdk_pid784142 00:36:41.027 
Removing: /var/run/dpdk/spdk_pid787957 00:36:41.027 Removing: /var/run/dpdk/spdk_pid793943 00:36:41.027 Removing: /var/run/dpdk/spdk_pid795247 00:36:41.027 Removing: /var/run/dpdk/spdk_pid796602 00:36:41.027 Removing: /var/run/dpdk/spdk_pid797923 00:36:41.027 Removing: /var/run/dpdk/spdk_pid802404 00:36:41.027 Removing: /var/run/dpdk/spdk_pid806736 00:36:41.027 Removing: /var/run/dpdk/spdk_pid810753 00:36:41.027 Removing: /var/run/dpdk/spdk_pid818630 00:36:41.027 Removing: /var/run/dpdk/spdk_pid818638 00:36:41.027 Removing: /var/run/dpdk/spdk_pid823124 00:36:41.027 Removing: /var/run/dpdk/spdk_pid823355 00:36:41.027 Removing: /var/run/dpdk/spdk_pid823580 00:36:41.027 Removing: /var/run/dpdk/spdk_pid824040 00:36:41.027 Removing: /var/run/dpdk/spdk_pid824045 00:36:41.027 Removing: /var/run/dpdk/spdk_pid828525 00:36:41.027 Removing: /var/run/dpdk/spdk_pid829098 00:36:41.027 Removing: /var/run/dpdk/spdk_pid833286 00:36:41.027 Removing: /var/run/dpdk/spdk_pid835955 00:36:41.027 Removing: /var/run/dpdk/spdk_pid841125 00:36:41.027 Removing: /var/run/dpdk/spdk_pid846601 00:36:41.027 Removing: /var/run/dpdk/spdk_pid855261 00:36:41.027 Removing: /var/run/dpdk/spdk_pid862263 00:36:41.027 Removing: /var/run/dpdk/spdk_pid862265 00:36:41.027 Removing: /var/run/dpdk/spdk_pid881544 00:36:41.027 Removing: /var/run/dpdk/spdk_pid882019 00:36:41.027 Removing: /var/run/dpdk/spdk_pid882499 00:36:41.027 Removing: /var/run/dpdk/spdk_pid883177 00:36:41.027 Removing: /var/run/dpdk/spdk_pid883726 00:36:41.027 Removing: /var/run/dpdk/spdk_pid884384 00:36:41.027 Removing: /var/run/dpdk/spdk_pid884868 00:36:41.027 Removing: /var/run/dpdk/spdk_pid885339 00:36:41.027 Removing: /var/run/dpdk/spdk_pid889368 00:36:41.027 Removing: /var/run/dpdk/spdk_pid889610 00:36:41.027 Removing: /var/run/dpdk/spdk_pid895675 00:36:41.027 Removing: /var/run/dpdk/spdk_pid895804 00:36:41.027 Removing: /var/run/dpdk/spdk_pid901185 00:36:41.027 Removing: /var/run/dpdk/spdk_pid905423 00:36:41.027 Removing: /var/run/dpdk/spdk_pid915666 00:36:41.027 Removing: /var/run/dpdk/spdk_pid916139 00:36:41.027 Removing: /var/run/dpdk/spdk_pid920380 00:36:41.027 Removing: /var/run/dpdk/spdk_pid920636 00:36:41.027 Removing: /var/run/dpdk/spdk_pid924657 00:36:41.027 Removing: /var/run/dpdk/spdk_pid930317 00:36:41.027 Removing: /var/run/dpdk/spdk_pid932934 00:36:41.027 Removing: /var/run/dpdk/spdk_pid942823 00:36:41.027 Removing: /var/run/dpdk/spdk_pid951484 00:36:41.027 Removing: /var/run/dpdk/spdk_pid953144 00:36:41.027 Removing: /var/run/dpdk/spdk_pid954023 00:36:41.027 Removing: /var/run/dpdk/spdk_pid970415 00:36:41.027 Removing: /var/run/dpdk/spdk_pid974230 00:36:41.027 Removing: /var/run/dpdk/spdk_pid976909 00:36:41.027 Removing: /var/run/dpdk/spdk_pid984415 00:36:41.027 Removing: /var/run/dpdk/spdk_pid984422 00:36:41.027 Removing: /var/run/dpdk/spdk_pid989443 00:36:41.027 Removing: /var/run/dpdk/spdk_pid991262 00:36:41.027 Removing: /var/run/dpdk/spdk_pid993178 00:36:41.027 Removing: /var/run/dpdk/spdk_pid994420 00:36:41.027 Removing: /var/run/dpdk/spdk_pid996390 00:36:41.027 Removing: /var/run/dpdk/spdk_pid997455 00:36:41.027 Clean 00:36:41.286 07:46:09 -- common/autotest_common.sh@1453 -- # return 0 00:36:41.286 07:46:09 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:41.286 07:46:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:41.286 07:46:09 -- common/autotest_common.sh@10 -- # set +x 00:36:41.286 07:46:09 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:41.286 07:46:09 -- common/autotest_common.sh@732 -- # 
xtrace_disable 00:36:41.286 07:46:09 -- common/autotest_common.sh@10 -- # set +x 00:36:41.286 07:46:09 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:41.286 07:46:09 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:41.286 07:46:09 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:41.286 07:46:09 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:41.286 07:46:09 -- spdk/autotest.sh@398 -- # hostname 00:36:41.286 07:46:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:41.545 geninfo: WARNING: invalid characters removed from testname! 00:37:03.672 07:46:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:03.672 07:46:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:05.038 07:46:32 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:06.934 07:46:34 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:08.834 07:46:36 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:10.732 07:46:38 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:12.631 07:46:40 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:12.631 07:46:40 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:12.631 07:46:40 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:12.631 07:46:40 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:12.631 07:46:40 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:12.631 07:46:40 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:12.631 + [[ -n 465403 ]] 00:37:12.631 + sudo kill 465403 00:37:12.639 [Pipeline] } 00:37:12.655 [Pipeline] // stage 00:37:12.661 [Pipeline] } 00:37:12.676 [Pipeline] // timeout 00:37:12.681 [Pipeline] } 00:37:12.696 [Pipeline] // catchError 00:37:12.702 [Pipeline] } 00:37:12.718 [Pipeline] // wrap 00:37:12.725 [Pipeline] } 00:37:12.738 [Pipeline] // catchError 00:37:12.747 [Pipeline] stage 00:37:12.750 [Pipeline] { (Epilogue) 00:37:12.764 [Pipeline] catchError 00:37:12.765 [Pipeline] { 00:37:12.779 [Pipeline] echo 00:37:12.781 Cleanup processes 00:37:12.787 [Pipeline] sh 00:37:13.069 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:13.069 1027047 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:13.083 [Pipeline] sh 00:37:13.364 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:13.364 ++ grep -v 'sudo pgrep' 00:37:13.364 ++ awk '{print $1}' 00:37:13.364 + sudo kill -9 00:37:13.364 + true 00:37:13.376 [Pipeline] sh 00:37:13.657 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:25.863 [Pipeline] sh 00:37:26.146 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:26.146 Artifacts sizes are good 00:37:26.160 [Pipeline] archiveArtifacts 00:37:26.167 Archiving artifacts 00:37:26.296 [Pipeline] sh 00:37:26.576 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:26.591 [Pipeline] cleanWs 00:37:26.600 [WS-CLEANUP] Deleting project workspace... 00:37:26.600 [WS-CLEANUP] Deferred wipeout is used... 00:37:26.607 [WS-CLEANUP] done 00:37:26.609 [Pipeline] } 00:37:26.631 [Pipeline] // catchError 00:37:26.645 [Pipeline] sh 00:37:26.926 + logger -p user.info -t JENKINS-CI 00:37:26.935 [Pipeline] } 00:37:26.950 [Pipeline] // stage 00:37:26.955 [Pipeline] } 00:37:26.971 [Pipeline] // node 00:37:26.976 [Pipeline] End of Pipeline 00:37:27.015 Finished: SUCCESS